The 'Magnificent 7' in the Generative AI Era

The current growth of the S&P 500 is significantly driven by the 'Magnificent 7' (Apple, Microsoft, Google, Amazon, NVIDIA, Tesla, and Meta). That growth has been fueled by foundational advancements across the Generative AI Stack, from purpose-built generative AI chips to cloud providers offering scalable access to that compute power.

Powering AI Applications

🤖 The Generative AI Stack

The Generative AI Stack comprises three distinct layers: Applications, represented by tools like Copilots; Foundational Models and Large Language Models, which form the core of the system; and AI Infrastructure, including Compute Hardware and Chip Design, which is efficiently scaled and made accessible by Cloud Infrastructure Providers for both training and inference.
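
As a rough mental model, the stack can be expressed as a simple data structure. A minimal Python sketch follows; the layer names come from this article, while the example entries under each layer are illustrative, not exhaustive.

```python
# The three-layer Generative AI Stack described above, top to bottom.
# Layer names follow the article; entries are illustrative examples only.
generative_ai_stack = {
    "Applications": ["Copilots", "Code Assistants"],
    "FMs & LLMs": ["Foundational Models", "Large Language Models"],
    "AI Infrastructure": ["Cloud Infrastructure Providers",
                          "Compute Hardware and Chip Design"],
}

# Applications sit on models, which in turn run on AI infrastructure.
for layer, examples in generative_ai_stack.items():
    print(f"{layer}: {', '.join(examples)}")
```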

[Visualization: the Generative AI Stack, viewable as a cluster or tree]
Driving the Market

The Rise of the ‘Magnificent 7’

The term ‘Magnificent 7’ was coined earlier this year by Michael Hartnett, investment strategist at Bank of America, who highlighted how the seven largest companies in the S&P 500 had driven most of the index's returns so far this year. Much of that success can be attributed to AI's technological tailwinds.

[Chart: compare the shares of the S&P 500 vs. the 'Magnificent 7']
Insights

>28% Share

Apple, Microsoft, Google, Amazon, NVIDIA, Meta, and Tesla represent more than 28% of the S&P 500 Index, an all-time high.

+71% Avg. Return

Indexed together, the Magnificent 7 have returned +71% YTD. The S&P 500 has returned +19%, and the remaining 493 companies have returned just +6%.

+241% Max Return

NVIDIA has had the highest individual YTD return at over +241%. Meta is second at +150%, and Tesla is third at +67%.

Investments in The Generative AI Stack

💡 Top Technology Investments

All of the Magnificent 7 have invested in purpose-built compute hardware and chip designs for generative AI. Three of the top four (Microsoft, Google, Amazon) are Cloud Infrastructure Providers and together control the majority of that market. These providers enable access to compute at scale for AI workloads (training and inference) on the latest semiconductor technology.

1. Apple: Market Cap $2.95 trillion, Return YTD +22%
2. Microsoft: Market Cap $2.81 trillion, Return YTD +45%
3. Google: Market Cap $1.70 trillion, Return YTD +49%
4. Amazon: Market Cap $1.49 trillion, Return YTD +58%
📱 Applications
  Copilots:
    Apple: None
    Microsoft: Copilot (Bing, Edge, 365, Windows), ChatGPT (OpenAI)
    Google: Bard, Duet AI (Workspaces & Google Cloud)
    Amazon: Q
  Code Assistants:
    Apple: None
    Microsoft: GitHub Copilot
    Google: None
    Amazon: CodeWhisperer

🧠 FMs & LLMs
  Foundational Models and Large Language Models:
    Apple: None
    Microsoft: Turing, GPT-4 (OpenAI), DALL-E 3 (OpenAI), Llama 2 (Meta)
    Google: Vertex AI, Gemini, PaLM, Imagen, Codey, Chirp, Llama 2 (Meta), Claude (Anthropic)
    Amazon: Bedrock, Titan, Jurassic-2 (AI21 Labs), Claude (Anthropic), Llama 2 (Meta), Stable Diffusion (Stability AI)

⚙️ AI Infrastructure
  Cloud Infrastructure Provider:
    Apple: None
    Microsoft: Azure
    Google: Google Cloud
    Amazon: Amazon Web Services
  Compute Hardware and Chip Design:
    Apple: Neural Engine (M Series)
    Microsoft: Maia, Cobalt
    Google: Tensor Processing Units (TPUs)
    Amazon: Trainium, Inferentia

Custom hardware and chip designs for the bottom three of the 'Magnificent 7':

  • #5 NVIDIA: NVIDIA GPUs (Tesla, Ampere)
  • #6 Tesla: Tesla FSD Chip, D1 Chip
  • #7 Meta: Meta Training and Inference Accelerator (MTIA, releasing 2025)

The Success of Fabless Companies

⚙️ Chip Titans & Moore's Law

The bottommost layer of The Generative AI Stack has benefited from recent advancements in the semiconductor industry. All of the Magnificent 7 are involved in designing purpose-built chips for AI tasks (training and inference), but none manufacture these custom semiconductors themselves. They are considered fabless companies, in contrast to the foundries that actually manufacture the chips.

[Interactive chart: highlight a 'Magnificent 7' chip designer; tooltips can be shown for all designers or only the highlighted one]

Semiconductors power artificial intelligence, and advancements in chip density and computing power have followed Moore's Law: the empirical regularity that the number of transistors on an integrated circuit doubles approximately every two years. These exponential advancements have powered revolutionary technologies such as mobile phones, the internet, and now AI, and custom semiconductors may one day power quantum computing as well.
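
As a back-of-the-envelope illustration, Moore's Law can be written as N(t) = N0 * 2^((t - t0) / 2). The short sketch below projects transistor counts under that idealized doubling rule; the 1971 baseline (the Intel 4004's roughly 2,300 transistors) is a well-known historical figure, and the projection is illustrative rather than an exact industry history.

```python
def moores_law(n0: float, t0: int, t: int, doubling_years: float = 2.0) -> float:
    """Project the transistor count in year t, assuming the count doubles
    every `doubling_years` years from n0 transistors in year t0."""
    return n0 * 2 ** ((t - t0) / doubling_years)

# Baseline: Intel 4004 (1971), ~2,300 transistors.
for year in (1971, 1991, 2011, 2023):
    print(year, f"{moores_law(2300, 1971, year):,.0f} transistors")
```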

Apple is the leader in transistor density by chip design, holding the top three chips in the world by transistor count. Apple operates solely in the bottom AI Infrastructure layer of The Generative AI Stack, and this focus has placed it at the forefront of chip design.

Shifting Semiconductor Chip Architectures

Transitions from x86 to ARM

Apple led a paradigm shift in chip architectures when it transitioned from traditional x86 (Intel) to ARM with the launch of its M Series Apple Silicon. Six of the top ten chips, and eight of the top fifteen, are Apple Silicon specialized for personal hardware. Below, we compare Apple's chips available for MacBook Pros and Mac Studios.

[Chart: Apple M Series chips, sortable by version or generation]

Currently, only Mac Studios have access to M Series Ultra chips; the base, Pro, and Max chips are available in MacBook Pros. Unexpectedly, the new Apple M3 Pro chip has fewer transistors than the M2 Pro chip, and the M1 Max from 2021 has more transistors than both the 2023 M3 baseline and M3 Pro chips. For those in the market for a new MacBook Pro, the original M1 Max offers better price performance than the newer M3 and M3 Pro chips.
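
To make the comparison concrete, here is a small sketch using Apple's publicly announced transistor counts, in billions (M1 Max: 57, M2 Pro: 40, M3: 25, M3 Pro: 37); treat these figures as approximate launch-announcement numbers rather than values from this article's dataset.

```python
# Approximate transistor counts in billions, per Apple's announcements.
chips = {
    "M1 Max (2021)": 57,
    "M2 Pro (2023)": 40,
    "M3 (2023)": 25,
    "M3 Pro (2023)": 37,
}

# Sorting by count confirms the surprise noted above: the 2021 M1 Max
# outranks both 2023 chips, and the M3 Pro trails the M2 Pro.
for name, count in sorted(chips.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {count}B transistors")
```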

Accelerating Training and Inference

🛠️ Purpose-built for Generative AI

Modern neural networks often require significant computational power to process the vast amounts of data needed for training and inference. CPUs (Central Processing Units) are versatile and can handle a variety of tasks, but they have a limited number of cores, which makes them inefficient for the parallel processing that neural networks demand. GPUs (Graphics Processing Units), on the other hand, have hundreds or thousands of cores designed to handle many tasks simultaneously, making them ideal for the matrix and vector operations that neural networks require. They significantly accelerate both the training and execution of neural network models.

Cloud providers, recognizing the unique demands of AI workloads, have started to create custom chips tailored to these tasks. These specialized processors, such as Amazon Web Services' Inferentia and Trainium chips, are optimized for the specific matrix operations and data patterns prevalent in machine learning and offer even greater efficiency and speed than general-purpose CPUs or GPUs.

Above, we have created bespoke visualizations to represent the transistor counts of the top chips from each cloud provider. Though NVIDIA is not a cloud provider, its custom GPUs are made available by most cloud providers. The chips are sized by transistor count, then colored and styled by microprocessor type (CPU, GPU, AI Accelerator). The pattern styling for CPUs is linear, representing the sequential nature of CPUs; the styling for GPUs is a grid, representing their parallel nature; and the styling for AI Accelerators is radial, representing their hybrid nature. This is not a complete representation of all chips available from each cloud provider, as specifications like transistor count are not always publicly available, especially for new custom AI chips.

GPUs have revolutionized AI training by offering unparalleled processing power for parallel tasks, a cornerstone of deep learning algorithms. Unlike CPUs, which are designed to handle a broad range of computing tasks sequentially, GPUs excel at simultaneously executing thousands of smaller, more specialized operations. This is particularly advantageous in AI training, where tasks like matrix multiplication, a fundamental operation in neural network algorithms, are abundant. By processing these operations in parallel, GPUs significantly reduce the time required to train complex models. Furthermore, the architecture of GPUs allows for more efficient handling of the large datasets typical in deep learning. This efficiency is not just about speed; it also enables the training of more complex models with larger datasets, pushing the boundaries of what AI can achieve.
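
To see why parallelism matters, compare a pure-Python triple loop, which performs one multiply-add at a time, with a vectorized matrix multiply dispatched to an optimized kernel. A minimal sketch, using NumPy as a stand-in for the parallel hardware a GPU provides; exact timings will vary by machine.

```python
import time

import numpy as np

n = 128
a = np.random.rand(n, n)
b = np.random.rand(n, n)

# Naive triple loop: one scalar multiply-add at a time, like a single
# sequential core working through the matrix product.
start = time.perf_counter()
c = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        for k in range(n):
            c[i, j] += a[i, k] * b[k, j]
loop_time = time.perf_counter() - start

# Vectorized matmul: the same arithmetic dispatched to an optimized,
# parallel kernel (on a GPU, spread across thousands of cores).
start = time.perf_counter()
c_fast = a @ b
fast_time = time.perf_counter() - start

assert np.allclose(c, c_fast)
print(f"naive loop: {loop_time:.2f}s, vectorized: {fast_time:.5f}s")
```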

While GPUs are the preferred choice for training neural networks, CPUs still play a critical role in AI development. CPUs are incredibly versatile and capable of handling complex logic and control tasks that GPUs are not designed for. In many AI applications, CPUs are used for data preprocessing, managing the AI training environment, and performing tasks that require sequential processing. Additionally, in scenarios where parallel processing is not as critical, such as with smaller-scale models or certain types of machine learning algorithms, CPUs can be sufficient. The synergy between CPUs and GPUs in AI systems provides a balanced approach, with each handling tasks that suit their strengths. This combination ensures that AI training is not only fast and efficient but also versatile and adaptable to various types of AI models and applications.

The Brains Behind AI

🧠 Neural Network Computations

Neural networks are computational models inspired by the human brain, structured in layers of interconnected nodes or "neurons" that work in unison to solve complex problems. Each neuron receives inputs, processes them through a mathematical function, and passes the output to the next layer. The strength of these connections, known as weights, is adjusted during the training process to minimize the difference between the network's prediction and the actual data.
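
Concretely, each neuron computes a weighted sum of its inputs plus a bias, then applies an activation function. A minimal sketch; the input values, weights, bias, and choice of a sigmoid activation are illustrative assumptions, not values from the article.

```python
import math

def neuron(inputs, weights, bias):
    """One neuron: a weighted sum of inputs plus a bias, passed
    through a sigmoid activation that squashes the result to (0, 1)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

# Illustrative values only: three inputs feeding a single neuron.
print(neuron([0.5, -1.0, 2.0], [0.4, 0.3, -0.2], bias=0.1))
```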

In the context of the demonstrated visualization, with three inputs, one hidden layer of four nodes, and two output nodes, the neural network works as follows. The input layer receives the initial data, which is weighted and passed to the hidden layer. Each of the four hidden nodes processes the data in parallel, applying an activation function to introduce non-linearity. The processed data from the hidden layer is then weighted again and passed to the two output nodes, which might represent a binary classification task. The entire network operates in concert: each layer's output depends on the previous layer's, and the weights are fine-tuned through training to optimize the final prediction accuracy.
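
The forward pass of that 3-4-2 network fits in a few lines of NumPy. A sketch under stated assumptions: randomly initialized weights stand in for trained ones, the hidden layer uses a ReLU activation, and the two outputs go through softmax to mimic a binary classification head.

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomly initialized weights and biases stand in for trained values.
w_hidden = rng.normal(size=(3, 4))  # 3 inputs -> 4 hidden nodes
b_hidden = np.zeros(4)
w_out = rng.normal(size=(4, 2))     # 4 hidden nodes -> 2 output nodes
b_out = np.zeros(2)

x = np.array([0.2, -0.7, 1.5])      # the three input values

# Hidden layer: weighted sums plus bias, then ReLU for non-linearity.
hidden = np.maximum(0, x @ w_hidden + b_hidden)

# Output layer: weighted sums, then softmax for two class probabilities.
logits = hidden @ w_out + b_out
probs = np.exp(logits) / np.exp(logits).sum()
print(probs)  # two probabilities that sum to 1
```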

Capitalizing across The Generative AI Stack

☁️ Cloud AI Infrastructure

Amazon, Microsoft, and Google play a crucial role across all layers of The Generative AI Stack. They are the only three companies in the 'Magnificent 7' that are Cloud Infrastructure Providers (AWS, Azure, Google Cloud). Apple is the leader in chip design, but it is the only company in the top 4 of the Magnificent 7 that does not offer Cloud Infrastructure as a service.

Cloud infrastructure providers are integral to the entire Generative AI stack, offering the foundational hardware at the bottom, tools and platforms for model development in the middle, and services for application deployment and scaling at the top. They enable the AI ecosystem to function seamlessly, from hardware to user-facing applications.

Top Cloud Providers

Market Share & YoY Growth per Quarter

[Linked views: filter all views by selecting on Revenue by Quarter; view Market Share (%) by Revenue or YoY Growth (%) by Quarter]

Amazon Web Services (AWS) still leads the cloud computing market, though its share is slowly declining as Microsoft Azure's revenue climbs. AWS's growth has steadied as enterprises ease their recent focus on cost optimization. The sector, propelled by AI advancements, presents robust business opportunities and long-term growth prospects driven by new AI-based workloads.