It seems like it was only a matter of time. In 2019, Apple finally gave in to vocal requests for a PC-like modular professional tower system to replace its idiosyncratic 2013-era trash-can Mac Pro, but it never really felt like the company’s heart was in the new, somewhat more traditional model. Apple last refreshed it in 2023 with an M2 Ultra processor, after which it simply began the process of slowly vanishing. On Thursday, the company confirmed to 9to5Mac that it had been discontinued and wouldn’t be replaced.

I reached out to Apple for more information, but didn’t immediately hear back.

Part of the Mac Pro’s problem was that the intent didn’t fit comfortably into Apple’s system-on-chip strategy for its M-series processors. As I commented at the time, “One irony of the M2 Ultra upgrade, though, is that Apple has essentially made the Mac Pro less modular, which was the reason everyone clamored for it to begin with.” 

SoCs integrate the memory and GPU on-chip, while the whole point of workstation-class modularity is the ability to upgrade the memory and GPU, or to link multiple GPUs; Apple's chips don't support discrete GPUs at all. And with the slots essentially there for Afterburner-type add-in cards for heavy-duty video processing (those cards have also quietly disappeared, by the way), I always felt the target market was narrowly focused on people in Apple TV and cinema production workflows.

Another reason modularity is important for this class of workstations is that they’re really expensive and, in a lot of cases, are serviced by IT departments, which are very fond of mixing and matching components, passing them down as the luckier folks get upgraded to newer gear. And being able to spread the expense of the new system over time is appealing, too. 

The compact Mac Studio replaced the Mac Pro as Apple's most powerful desktop with the launch of the M3 Ultra, and it aligns far better with Apple's silicon strategy. The Pro Display XDR, which debuted alongside the Mac Pro and was aimed at that same audience, was also discontinued this year and replaced with the smaller, more prosumer-friendly and Mac Studio-aligned Studio Display.

Now that a lot of the high-end workstation market has pivoted to GPU-intensive AI operations like machine learning (especially deep learning) and related development like robotics, the lack of upgradability has become even more of a downside. It makes sense that Apple would want to funnel its Ultra chips into the more popular Mac Studios.

I suppose the upside is that Apple doesn’t have to scavenge for memory and other components in short supply, thanks to AI-driven shortages, or have to renew its ties with Nvidia, which Apple broke with in 2018.





What is Artificial Intelligence? 

Artificial intelligence is the ability of a computer program to reason, learn, and act in ways that resemble human intelligence. It isn't the same thing as machine learning or robotics: machine learning is one technique for building AI, and robotics is one place AI gets applied.

Artificial intelligence isn't a single technology; it encompasses many kinds of techniques with a shared goal: autonomous machines that can think for themselves.

The most common forms of artificial intelligence include:

  • Natural language processing (NLP): NLP systems work with human language in spoken or written form: understanding speech, interpreting sentences, and carrying out pattern-detection tasks like spotting spam emails.
  • Deep learning: This branch of AI trains computers to recognize speech, identify photos and videos, or translate languages by using multi-layered ("deep") neural networks.
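The spam-detection task mentioned above can be sketched with a tiny naive Bayes text classifier. This is a minimal illustration, not a production filter: the training messages, function names, and thresholds are all invented, and real systems train on far more data.

```python
from collections import Counter
import math

# Toy training data: (message, is_spam) pairs, entirely made up for illustration.
TRAIN = [
    ("win a free prize now", True),
    ("free money click now", True),
    ("limited offer win cash", True),
    ("meeting at noon tomorrow", False),
    ("see the project notes attached", False),
    ("lunch tomorrow with the team", False),
]

def train(pairs):
    """Count word frequencies per class (spam vs. not spam)."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, is_spam in pairs:
        for word in text.split():
            counts[is_spam][word] += 1
            totals[is_spam] += 1
    return counts, totals

def spam_score(text, counts, totals):
    """Log-odds that `text` is spam, with add-one smoothing for unseen words."""
    vocab = set(counts[True]) | set(counts[False])
    score = 0.0
    for word in text.split():
        p_spam = (counts[True][word] + 1) / (totals[True] + len(vocab))
        p_ham = (counts[False][word] + 1) / (totals[False] + len(vocab))
        score += math.log(p_spam / p_ham)
    return score

counts, totals = train(TRAIN)
print(spam_score("win free cash now", counts, totals) > 0)        # spam-like words
print(spam_score("project meeting tomorrow", counts, totals) > 0)  # ham-like words
```

A positive score means the words look more like the spam examples; a negative score means they look more like the legitimate ones.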

The idea of artificial intelligence has been around for a long time

The term was coined in 1956 by John McCarthy, but the idea itself is not new: stories of artificial beings that think and act go back at least to the ancient Greeks.

The technology needed to build artificial intelligence (AI) has advanced enormously since then, as well as our understanding of how we can best teach computers to do things like recognize speech or understand language.

Key Events In The History Of Artificial Intelligence

Machine learning is a subset of AI, which is itself a branch of computer science that has been around for decades. AI is the study of making computers that can think like humans, a task long considered impossible given the limits of traditional computing technology.

AI also has a long history in fiction. Many movies and TV shows have featured AI characters, including HAL 9000 from 2001: A Space Odyssey, Data from Star Trek: The Next Generation, and WALL-E from Pixar’s 2008 movie WALL-E.

1940-1960: Birth of AI in the wake of cybernetics

The term "artificial intelligence" was introduced in 1956 at the Dartmouth College workshop in New Hampshire. Around the same time, Allen Newell, J. C. Shaw, and Herbert Simon at RAND and the Carnegie Institute of Technology wrote the Logic Theorist (1956), widely considered the first AI program, and early AI groups soon formed at MIT, Stanford, and elsewhere. These early experiments involved logic tasks such as theorem proving and semantic networks, approaches that have since been generalized to other areas.

Game playing drove much of this early work. In the 1950s, Arthur Samuel at IBM wrote a checkers program that learned from experience and eventually beat strong human players, and in 1957 Alex Bernstein's team demonstrated one of the first complete chess-playing programs on an IBM 704. (IBM's chess machine Deep Blue would not defeat world champion Garry Kasparov until 1997.)

The era also produced the first industrial robot: Unimate, which joined a General Motors assembly line in 1961. And in 1966, Joseph Weizenbaum at MIT created ELIZA, an early natural-language program that simulated a psychotherapist by matching a user's input against simple patterns and echoing it back as a question. ELIZA had no real understanding, yet many people found its conversations surprisingly convincing, a phenomenon still known today as the "ELIZA effect."
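The pattern-matching idea behind ELIZA is simple enough to sketch in a few lines of Python. The rules below are invented for illustration; they are not Weizenbaum's original script:

```python
import re

# A handful of ELIZA-style rules: a regex pattern and a response template.
RULES = [
    (re.compile(r"\bI need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT = "Please, go on."

def respond(utterance):
    """Return the first matching template, filled with the captured text."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I am feeling tired"))  # How long have you been feeling tired?
print(respond("hello"))               # Please, go on.
```

There is no understanding anywhere in this loop, which is exactly Weizenbaum's point: reflecting a user's words back is enough to create the illusion of a listener.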

The 1970s brought the first "AI winter"

During the 1960s and early 1970s, researchers built systems that could recognize images or perform tasks such as playing chess and translating languages. But these early attempts fell far short of expectations, often performing worse than humans, and by the mid-1970s funding and enthusiasm had dried up.

The expectations themselves had been famously overoptimistic. In 1970, MIT professor Marvin Minsky predicted that a machine with the general intelligence of an average human being was only a matter of years away, a forecast that proved wildly premature.

1980-1990: Expert Systems

Expert systems are computer programs that emulate the decision-making abilities of a human expert: they encode a specialist's knowledge as if-then rules and apply those rules to reach conclusions. They were used in many industries, but their best-known applications were in medicine (MYCIN) and chemistry (DENDRAL).

In the 1980s, expert systems became the first commercially successful form of AI. Digital Equipment Corporation's XCON, which configured computer orders automatically, reportedly saved the company millions of dollars a year. The appeal was making complex decisions from incomplete or specialized information: deciding which car to buy within a price range, say, or choosing one brand over another based on its reputation for reliability and durability over time.
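The if-then character of an expert system can be sketched with a toy example. The function name, rules, and advice strings below are all invented for illustration; real systems like XCON held thousands of rules and chained them together:

```python
# A toy rule-based "expert": facts in, hand-written if-then rules applied in order.
def recommend_car(budget, priority):
    """Mimic how 1980s expert systems encoded a specialist's decision
    process as a sequence of if-then rules, firing the first that matches."""
    rules = [
        (lambda b, p: b < 15000, "look at reliable used models"),
        (lambda b, p: p == "reliability", "favor brands with strong repair records"),
        (lambda b, p: p == "performance", "consider a sportier trim within budget"),
    ]
    for condition, advice in rules:
        if condition(budget, priority):
            return advice
    return "gather more information before deciding"

print(recommend_car(12000, "reliability"))   # first rule fires: budget < 15000
print(recommend_car(30000, "performance"))   # falls through to the third rule
```

The strength of this design is transparency (every conclusion can be traced to a rule); the weakness, which helped end the expert-system boom, is that someone has to write and maintain every rule by hand.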

AI research became more grounded in mathematics and computer science in the 1990s

In the 1990s, AI research became more grounded in mathematics and computer science, with statistical and probabilistic methods displacing hand-coded rules. Researchers began to focus on building machines that could perceive, reason, and act upon the world. This was a new challenge: earlier work had concentrated on programs that performed narrow, well-defined tasks (such as playing chess or parsing fragments of natural language).

AI From 2000-2010 

AI was a hot topic throughout the 2000s. Google's search engine, launched in 1998, leaned increasingly on statistical and machine learning techniques during this decade to interpret queries, correct spelling, and rank results, and practical speech recognition systems that converted spoken words into text began to mature.

IBM began building Watson, an automated question-answering system driven by natural language processing (NLP), in the mid-2000s; it went on to beat human champions on Jeopardy! in 2011. By 2010, artificial intelligence had become a quiet part of daily life, powering everything from web search to flight booking.

AI 2010-Present Day 

AI is now being used in many industries. Modern systems make decisions based on data rather than hand-written rules; in other words, they can learn through experience and improve over time, sometimes with human input (like correcting your voice assistant when it mishears a request).
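The "learning from experience" idea can be illustrated with one of the simplest possible learners, a 1-nearest-neighbor classifier. Everything here (the class name, the data points, the labels) is invented for illustration; the point is only that decisions come from stored data, not from rules anyone wrote:

```python
# A minimal 1-nearest-neighbor learner: it classifies a new point by
# finding the most similar example it has seen, so adding more examples
# directly improves its decisions.

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

class NearestNeighbor:
    def __init__(self):
        self.examples = []  # list of (features, label) pairs

    def learn(self, features, label):
        self.examples.append((features, label))

    def predict(self, features):
        # Pick the label of the closest stored example.
        _, label = min(
            ((distance(features, f), lab) for f, lab in self.examples),
            key=lambda pair: pair[0],
        )
        return label

model = NearestNeighbor()
model.learn((1.0, 1.0), "cat")
model.learn((9.0, 9.0), "dog")
print(model.predict((2.0, 1.5)))  # "cat": nearest stored example
```

Production systems use far more sophisticated models, but the shape is the same: feed in examples, get out decisions that improve as the examples accumulate.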

AI is also being used for facial recognition and voice transcription; translation between languages; autonomous vehicles (cars that drive themselves); drones (remote-controlled flying machines); and robot assistants that help people with daily tasks like cleaning up after meals or taking out the trash at home.

Despite the increase in automation, humans are still very much needed in many industries

Despite the increase in automation, humans are still very much needed in many industries.

  • Humans are still needed for creativity and innovation. AI can't invent genuinely new products or services; only people can come up with something truly original.
  • Humans are still required for judgment. AI systems may be able to perform tasks like suggesting a diagnosis, but they don't do it as reliably as human doctors and nurses, and they often struggle to make open-ended decisions on their own (for example, which drug should be administered first).
  • Humans are still needed for social interaction, with other people and with machines, in work environments such as factories, where workers and machinery share the same physical space.

Because AI is such a young field, we are just starting to see huge breakthroughs.

AI is a young field in practical terms, and we are only starting to see major breakthroughs. It isn't just about computers and robots; it's about how we can use AI to solve real problems.

AI research goes back decades, but the most dramatic breakthroughs are recent. In 1997, IBM's Deep Blue beat Garry Kasparov at chess, the first time a computer defeated a reigning world champion in a match. In 2016, Google DeepMind's AlphaGo beat Lee Sedol at Go four games to one, a result that shocked many observers, since Go's enormous search space had long been thought to put the game beyond the reach of computers.


Conclusion

We have seen many advances in artificial intelligence over the past few decades, and every year brings new applications and opportunities for the technology to make our lives easier. That is a positive trend, but also a cause for concern if our understanding doesn't keep pace with AI research. The more we learn about how these systems work, the better equipped we will be to use them well. I hope this article helped you.
