
The shortage of advanced packaging capacity that is "choking NVIDIA" will take another 18 months to ease, while TSMC's revenue in August hit a seven-month high

    From January to August this year, TSMC's revenue totaled NT$1.36 trillion, down 5.2% from the same period in 2022. The chipmaker lowered its full-year revenue forecast to an annual decline of 10 percent, a steeper drop than the mid-single-digit contraction it previously expected.
    This decline reflects the chip industry's downturn this year, but with the help of AI chip and iPhone 15 orders, TSMC still expects to meet its revenue target for this quarter.
    TSMC has said that growth in AI-related demand will support strong demand for 3nm chips, offsetting the impact of customers' continued inventory adjustments. Apple is the first adopter of TSMC's most advanced process technology, and NVIDIA's H100 AI GPU uses TSMC's 4nm manufacturing process.
    CoWoS packaging capacity is tight, and the shortage of NVIDIA AI chip supply may continue until 2025
    It is worth mentioning that the surge in demand for AI chips has also strained TSMC's advanced packaging capacity.
    TSMC Chairman Mark Liu recently pointed out at a public event that demand for CoWoS has tripled in the past year. At present, TSMC cannot meet 100% of customer demand, but will try to satisfy about 80% of it.
    Mark Liu believes that the shortage of CoWoS packaging capacity is only temporary, and that as TSMC expands packaging capacity, the supply shortage will take about 18 months to ease. This means that NVIDIA's data center GPUs may remain in short supply for some time to come.
    Today, many of the most advanced chips on the market consist of multiple dies that are manufactured separately and then joined together, and one of the most common technologies for joining them is CoWoS.
    CoWoS packaging is a key bottleneck in NVIDIA's chip production capacity, and HBM and CoWoS packaging are complementary technologies: HBM's high pad count and short trace-length requirements call for CoWoS advanced packaging to achieve high-density, short-distance interconnects that cannot be realized on PCBs or even on package substrates.
    At present, almost all HBM uses CoWoS packaging technology. TSMC uses CoWoS to produce NVIDIA's flagship H100, but demand has been so explosive that TSMC's production lines cannot close the supply-demand gap even running at full capacity.
    According to Quartz, some server manufacturers have to wait six months to get their hands on the H100 chip.
    Expand production capacity and strengthen R&D
    To solve this problem, TSMC is expanding its production capacity: it has opened three new factories in Zhunan, Longtan and Taichung, of which the Zhunan plant covers 14.3 hectares, larger than the company's other packaging plants combined.
    At present, TSMC is building a new factory in Miaoli, Taiwan to increase chip packaging production capacity. The project is expected to cost $2.9 billion.
    In addition to expanding capacity, TSMC is also investing in research and development. Earlier this year, the company detailed a new version of its CoWoS technology under development, called CoWoS-L, which will allow TSMC's customers to pack more transistors into a single package, speeding up processing.

GPT-5 Secret Training Exposed! Will ChatGPT usher in another blockbuster upgrade?

    Since the birth of ChatGPT at the end of last year, the world has seen wave after wave of generative AI frenzy. While people were still absorbing the revolutionary power of GPT-4, news emerged that OpenAI was secretly training GPT-5; perhaps in the near future, how to coexist with AI will become an unavoidable question. Mustafa Suleyman, co-founder of DeepMind and CEO of Inflection AI, revealed the news in an interview: OpenAI is secretly training GPT-5.
    In April, Sam Altman, the head of OpenAI, made it clear that the company was not developing GPT-5.
    However, Mustafa Suleyman believes that Sam Altman was not telling the truth, and that his earlier remarks were more like a smokescreen to confuse competitors. In addition, Suleyman discussed the future direction of Inflection AI.
    Over the next 18 months, the models Inflection AI trains will be 100 times larger than today's cutting-edge models, and within 3 years they will be 1,000 times larger.
    Inflection AI currently has 6,000 H100s training models; by December, 22,000 H100s will be fully operational, with 1,000 to 2,000 more added every month.
    Interestingly, Mustafa Suleyman said in the interview that he wants all companies with large-scale computing resources to be as transparent as possible, which is why Inflection AI discloses the total amount of computing power it has.
    On August 31, Israeli artificial intelligence startup AI21 Labs announced that it had raised $155 million in a Series C round of financing, with tech giants Alphabet and Nvidia participating in the financing.
    In March this year, AI21 Labs released its latest large language model, Jurassic-2, without disclosing its parameter count.
    By contrast, the largest version of the previous-generation model, Jurassic-1, has 178 billion parameters, making it one of the largest LLMs on the market, larger than OpenAI's 175-billion-parameter GPT-3. As a result, AI21 Labs is often considered OpenAI's biggest competitor; even NVIDIA CEO Jensen Huang has praised the company, calling it "accurate, trustworthy and reliable."
    The current Jurassic-2 family includes three sizes of base language model: Large, Grande, and Jumbo, as well as instruction-tuned versions of Jumbo and Grande.
    In addition, AI21 Labs has introduced five task-specific APIs that users can choose from: paraphrasing, summarization, text suggestions, grammar correction, and long-text segmentation.

The core theory of the Internet is shattered, and the United States joins the Japanese and European autonomous driving camp

    Autonomous driving is becoming increasingly hot, and drafting cybersecurity documents for self-driving cars is on the agenda in every major car-manufacturing country. As momentum builds, whoever can preemptively formulate business rules aligned with its own technical route will gain a head start, and the United States, Japan, and European carmakers are all vying for that lead in autonomous driving...

Looking at the levels of autonomous driving: what is the difference between autonomous driving and assisted driving?

    In recent years, "autonomous driving" has become a red-hot buzzword, and a recent fatal Tesla crash has put "Autopilot" in the headlines. From the cutting-edge car brand Tesla to the technology giant Google, to traditional automakers Chevrolet, Ford and BMW, the ranks of companies researching self-driving keep growing. But even as we hear more and more about "autonomous driving", do we really understand the concept? Today, this article starts from the levels of automation and explains the difference between autonomous driving and assisted driving...

How far is autonomous driving? How much do you understand?

    Tesla's frequent recent "accidents" have made many people worry about autonomous driving: which technologies are not yet mature, and what are the solutions? These are questions on many people's minds.

    In fact, your understanding of autonomous driving may still contain some misconceptions. The "Intelligent Internal Reference" column once shared an autonomous driving report by the Boston Consulting Group, which explained in great detail that autonomous driving is graded into levels: at level 0 the human performs all operations, while at level 5 the car needs no human operation at all...
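    The graded hierarchy described above can be sketched as a small enum. This is a minimal illustration, assuming the common SAE J3016 convention of six levels (0-5), with the boundary between "assisted" and "autonomous" drawn between levels 2 and 3; the one-line descriptions are paraphrased summaries, not official wording:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Six levels of driving automation (SAE J3016 convention)."""
    NO_AUTOMATION = 0           # human performs all driving tasks
    DRIVER_ASSISTANCE = 1       # one assist feature, e.g. adaptive cruise control
    PARTIAL_AUTOMATION = 2      # steering + speed assist, driver still supervises
    CONDITIONAL_AUTOMATION = 3  # car drives itself, driver must take over on request
    HIGH_AUTOMATION = 4         # no takeover needed within a limited operating domain
    FULL_AUTOMATION = 5         # no human operation needed under any conditions

def is_assisted_driving(level: SAELevel) -> bool:
    """Levels 0-2 count as assisted driving: the human remains responsible."""
    return level <= SAELevel.PARTIAL_AUTOMATION

print(is_assisted_driving(SAELevel.PARTIAL_AUTOMATION))      # True
print(is_assisted_driving(SAELevel.CONDITIONAL_AUTOMATION))  # False
```

    Drawing the line at level 2 reflects the usual reading of the report's point: below level 3 the system only assists while the human drives, and from level 3 upward the system itself performs the driving task.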

Six reasons it is unreliable: popularizing autonomous driving will have to wait another ten years

    From basic electronic assistance systems such as blind-spot warning and lane-keeping assist to Tesla's driverless Autopilot technology, the field has stood in the spotlight and advanced by leaps and bounds at an astonishing speed. You might think these technologies became widespread in a short two to three years. Recently, Mercedes-Benz's driverless bus completed its first 20-kilometer self-driving road test in the Netherlands, and the Chinese automaker Haval also demonstrated a self-driving car based on the H8... Reports about autonomous driving appear so frequently that many people feel it is truly close to us...
