Tag: graphics cards

  • Micron’s GDDR Stacking: A New Challenge for Gamers

    Amid the memory shortage, Micron is reportedly turning to the GDDR modules used in gaming graphics cards to develop an alternative to HBM. According to a report from ETNews, Micron is exploring stacking GDDR modules to create a solution with significantly higher capacity.

    The report indicates that, initially, a multi-layer GDDR architecture of roughly four layers will be employed, with prototypes anticipated as early as next year. While stacked GDDR is a novel concept, the report does not go into technical specifics. Micron has previously investigated multi-layer DRAM assembly, including LPDDR5X stacks of up to 16-Hi with a capacity of 256 GB per module.
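    As a rough illustration of the capacity math behind stacking, here is a minimal sketch. The die density, layer count beyond the report's "approximately four," and package count are illustrative assumptions, not figures from the report:

    ```python
    # Hypothetical capacity math for a stacked GDDR package.
    # Die density and package count are illustrative assumptions.
    GBIT_PER_DIE = 24        # e.g. a 24 Gbit (3 GB) GDDR7 die (assumed)
    LAYERS = 4               # the ~four-layer stack mentioned in the report
    PACKAGES_PER_CARD = 8    # typical for a 256-bit bus, 32 bits per package

    gb_per_package = GBIT_PER_DIE * LAYERS / 8   # gigabits -> gigabytes
    total_gb = gb_per_package * PACKAGES_PER_CARD

    print(f"{gb_per_package:.0f} GB per package, {total_gb:.0f} GB total")
    ```

    Under these assumptions, a four-layer stack quadruples capacity per package without widening the memory bus, which is the appeal for inference workloads.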

    Earlier, it was reported that Google’s TurboQuant drove DDR5 prices down, while gaming PCs are still expected to rise in price by 15-30%. This development could deepen the market crisis, much to gamers’ dismay. Major manufacturers are currently prioritizing the demands of AI infrastructure: while traditional HBM has sufficed for training advanced LLMs, memory capacity is becoming increasingly critical.

    One way for Micron to address the challenges of a multi-layered GDDR configuration would be to sacrifice clock frequency, though the company says it is committed to pioneering innovative solutions. Micron previously faced setbacks with HBM4 after NVIDIA’s certification delays: despite presenting modules for Vera Rubin, its shipments were cut while competitors such as Samsung expanded their supply. A stacked GDDR configuration could find a market if it proves more economical than HBM.

    The report reiterates that while GDDR-based solutions will lag behind HBM in performance, they will offer greater memory capacity to support modern inference tasks. GDDR has not been hit by the AI boom as hard as LPDDR or DDR, since these modules have traditionally been confined to graphics processors. Given Micron’s plan to vertically integrate GDDR into a new AI-industry solution, the company may have sufficient production capacity. Rather than easing the graphics card shortage, however, Micron appears focused on meeting corporate demand.

    However, stacking LPDDR5X is considerably simpler than stacking GDDR, since the former is an energy-efficient module with manageable heat dissipation. With GDDR, Micron may struggle to maintain both thermal and signal integrity if wire-bonded connections are used.

    Market turmoil might yet be averted, as Google has developed a quantization algorithm that reduces AI memory requirements by a factor of six.


  • Breaking the Limits: Teclab Pushes RTX 50 GPUs Beyond 36 Gbps

    Teclab has successfully bypassed the frequency restrictions NVIDIA imposes on RTX 50 series graphics cards, overclocking the RTX 5070 Ti’s GDDR7 memory to a remarkable speed of over 36 Gbps.

    The RTX 50 series, equipped with GDDR7 memory, typically supports data transfer rates between 28 Gbps and 30 Gbps. For the RTX 5080, the claimed effective clock speed is 15000 MHz, while for the other models in the series, it’s 14000 MHz.
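    These clock figures follow the usual monitoring-tool convention of reporting the memory clock at half the per-pin data rate. A quick sanity check against the article’s numbers (this is a reporting convention, not vendor documentation):

    ```python
    def effective_clock_mhz(gbps_per_pin: float) -> float:
        """Memory clock as monitoring tools report it: half the per-pin data rate."""
        return gbps_per_pin * 1000 / 2

    assert effective_clock_mhz(28) == 14000   # RTX 5070 Ti and other models
    assert effective_clock_mhz(30) == 15000   # RTX 5080
    print(effective_clock_mhz(36))            # the overclocked result in MHz
    ```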

    The performance results Teclab posted from the Unigine Superposition benchmark are quite impressive.

    Teclab plans to showcase even greater overclocking potential and set new records in an upcoming video, testing GALAX’s RTX 5070 Ti HOF to push GDDR7 modules rated at a standard 28 Gbps even further.

    Previously, we reported complaints from NVIDIA RTX 50 graphics card owners about the latest Game Ready driver version 595.71, which was limiting voltage and reducing performance. Additionally, Lenovo and Asus accidentally listed a yet-to-be-released NVIDIA RTX 5070 mobile version with 12 GB in their laptop specifications.

    Currently, overclocking applications allow setting a maximum clock speed of around 3 GHz, or slightly higher; NVIDIA blocks anything beyond that. According to Burti_TecLab, they used the simplest GALAX 5070 Ti 1-Click OC model, without any shunt modifications; they simply circumvented the power limit.

    Because the multiplexer was disabled, the software could not read the card’s power consumption. The first run was at stock clocks; during the second, manual overclocking pushed the GPU past 3.3 GHz. The method involves tricking the GPU’s clock-management system at a logical or software level.

    The modification convinces the GPU that it is operating at a base or lower clock speed, while the real clocks are significantly higher; this applies to both GPU and memory frequencies. Although the monitoring application showed the clock dropping to 3.1-3.2 GHz and memory bandwidth at 28 Gbps, the third run actually ran higher than the second, with the RTX 5070 Ti’s memory exceeding 36 Gbps (over 18000 MHz).
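    Assuming the RTX 5070 Ti’s 256-bit memory bus (a spec not stated in the article), the bandwidth gain from the memory overclock works out roughly as follows:

    ```python
    def bandwidth_gb_s(gbps_per_pin: float, bus_width_bits: int) -> float:
        """Total memory bandwidth: per-pin data rate times bus width, in GB/s."""
        return gbps_per_pin * bus_width_bits / 8

    BUS_WIDTH = 256  # assumed RTX 5070 Ti memory bus width in bits

    stock = bandwidth_gb_s(28, BUS_WIDTH)        # stock 28 Gbps
    overclocked = bandwidth_gb_s(36, BUS_WIDTH)  # the reported 36 Gbps result

    print(f"{stock:.0f} -> {overclocked:.0f} GB/s (+{overclocked / stock - 1:.0%})")
    ```

    Under that assumption, the jump from 28 to 36 Gbps is a roughly 29% increase in raw memory bandwidth.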
