Intel Shows Off New Gen11 Graphics, Teases Xe Discrete GPU



Intel’s Architecture Day 2018 this past Tuesday wasn’t just a CPU show. Graphics is poised to be a significant component of Intel’s strategy going forward, and the company’s Gen 11 solution looks like it’ll be a potent improvement over the Gen 9 graphics Intel has shipped since Skylake. These improvements are long overdue.

For most of the past twenty years, the phrase “Intel graphics” was a contradiction in terms if you cared about gaming. Starting in 2011, with Sandy Bridge, that began to change. For roughly five years, Intel’s own solutions improved at a solid pace: from 2011 to 2015, IGP performance improved in real terms, meaning Intel’s GPUs got faster more quickly than games demanded additional GPU resources. There were still only a relative handful of titles that could be coaxed into running well, but the situation was improving by the year. And then it stopped. Neither Kaby Lake nor Coffee Lake contained any additional 3D optimizations. After 3.5 years in the proverbial wilderness, Intel wants to change that.

The new Gen 11 GPU is Intel’s first TFLOP-class graphics hardware. It implements a tile-based renderer, presumably to take advantage of tiled rendering’s lower power consumption and increased efficiency. The GPU will contain between 24 and 64 execution units and packs a 4x larger L3 cache.
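As a rough sanity check on that “TFLOP-class” claim, Intel’s Gen EUs are commonly described as delivering 16 FP32 operations per clock (two SIMD-4 ALUs with FMA). Assuming that figure and a plausible clock speed of around 1GHz (our assumption for illustration, not a number Intel quoted), a 64-EU part lands right at the 1 TFLOP mark:

```python
# Rough peak-FP32 estimate for a 64-EU Gen11 part. The per-EU figure and the
# clock speed are assumptions for illustration, not numbers Intel quoted.
eus = 64
flops_per_eu_per_clock = 16   # assumed: 2 SIMD-4 FP32 ALUs x 2 ops (FMA) per clock
clock_ghz = 1.0               # assumed: a plausible GT2-class clock

peak_tflops = eus * flops_per_eu_per_clock * clock_ghz / 1000
print(f"Estimated peak: {peak_tflops:.2f} TFLOPS")   # ~1.02 TFLOPS
```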

The tile-based renderer can be enabled or disabled on a per-render-pass basis. Memory compression has also improved, with a claimed best-case performance jump of 10 percent and a geometric mean improvement of 4 percent. The new Sunny Cove GT2 configuration intended for desktop will pack 64 EUs, compared with 24 EUs in Skylake’s GT2. That alone should be worth some significant performance improvements, subject to memory bandwidth constraints or other bottlenecks within the core.


Intel has also implemented a capability it calls Coarse Rate Shading, similar to Nvidia’s Variable Rate Shading. Coarse Rate Shading reduces the total amount of per-pixel shading work, based either on the distance between the camera and the area in question or on how close that area is to the center of the screen. By stretching the shading for a single pixel over a 2×2 block of pixels, Intel was able to improve performance by 10-20 percent in the demo we saw, depending on where the camera was on-screen. A 4×4 block size improved performance by even larger margins, though the impact on image quality was more visible at the distances at which players would actually sit.
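To make the idea concrete, here is a toy sketch (not Intel’s implementation, and the tile size and thresholds are made up) of how a renderer might pick a coarser shading rate for screen tiles far from the center, trading one shader invocation per pixel for one shared across a 2×2 or 4×4 block:

```python
# Toy illustration of the idea behind coarse/variable rate shading (NOT Intel's
# implementation): pick a shading rate per screen tile based on how far the
# tile sits from the center of the screen.
def shading_rate(tile_x, tile_y, tiles_x, tiles_y):
    # Normalized distance of the tile from the screen center: 0.0 (center) to 1.0 (corner).
    dx = (tile_x + 0.5) / tiles_x - 0.5
    dy = (tile_y + 0.5) / tiles_y - 0.5
    dist = (dx * dx + dy * dy) ** 0.5 / (0.5 ** 0.5)
    if dist < 0.4:
        return (1, 1)   # full rate: one shader invocation per pixel
    elif dist < 0.8:
        return (2, 2)   # one invocation shared by a 2x2 pixel block
    return (4, 4)       # one invocation shared by a 4x4 pixel block

# Example: an 8x8 grid of 16x16-pixel tiles; count the shader invocations avoided.
full = saved = 0
for ty in range(8):
    for tx in range(8):
        w, h = shading_rate(tx, ty, 8, 8)
        full += 16 * 16
        saved += 16 * 16 - (16 // w) * (16 // h)
print(f"Invocations avoided: {saved / full:.0%}")
```

The hardware version works per draw or per region rather than in a Python loop, of course, but the trade-off is the same: fewer pixel-shader invocations in areas where the viewer is unlikely to notice.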

The HEVC media engine has been designed from the ground up, with two decoders, one encoder, and support for up to 8K HEVC decoding. Adaptive Sync and HDR will both be supported by this hardware and by the discrete GPUs Intel teased under its new Xe branding as part of the event.

Intel demonstrated the new GPU’s capabilities by showing off Tekken in a side-by-side demo with Gen 9 (Skylake) on one side and Gen 11 (Sunny Cove) on the other. The systems were running Medium detail at 1080p with an 85 percent scaling factor, putting the actual resolution around 900p. Performance was significantly better in the Gen 11 demo, despite the early state of drivers and feature support. While it’s completely unclear how Sunny Cove will match up against AMD’s own APUs in late 2019, the jump in EU count and associated changes suggest Intel will at least compete much more effectively with AMD than it has of late.
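For reference, the math behind the demo’s roughly-900p figure, assuming the 85 percent scaling factor applies to each axis:

```python
# 1080p with an 85 percent linear scale factor lands near 900p.
base_w, base_h, scale = 1920, 1080, 0.85
scaled_w, scaled_h = round(base_w * scale), round(base_h * scale)
print(scaled_w, scaled_h)   # 1632 918, i.e. roughly "900p"
print(f"{scaled_w * scaled_h / (base_w * base_h):.0%} of the native 1080p pixel count")  # ~72%
```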


Intel Uses New Foveros 3D Chip-Stacking to Build Core, Atom on Same Silicon



3D chip stacking has long held promise as a meaningful method of advancing silicon performance, but progress in the field has been slow, to say the least. For one thing, CPUs have a nasty tendency to overheat if you stack cores on top of them. Validation has also been a slow, careful process. But at its Architecture Day this week, Intel revealed a new technology for connecting components and creating a 3D chip architecture. Codenamed Foveros (a Greek word said to mean fierce or amazing, though I had trouble finding the definition), this new 3D chip-stacking method could revolutionize product designs in the long term. Intel believes it can solve the problems that affect 3D CPU scaling and avoid the thermal issues that can kill these designs.

The chip Intel demoed today was a Foveros-enabled part with a Core CPU and an Atom CPU sharing the same physical silicon. If you’re wondering how already-in-market capabilities like EMIB compare to Foveros, the answer is that Foveros is a 3D chip-stacking solution, while EMIB is designed for 2D, side-by-side connections. The two technologies are not mutually exclusive, and we heard that Intel is planning hardware that will use both.


As far as what an actual implementation might look like, Intel’s block diagram shows two separate chips mounted to the same package via an active interposer layer.

This approach could give Intel tremendous flexibility when it comes to designing parts, a fact the company has already revealed to the public. At the request of a customer, Intel built a CPU with both an Atom and a standard Core processor onboard. This kind of heterogeneous combination has been done before, of course (ARM’s big.LITTLE speaks for itself), but this is the first time we’ve seen it in an Intel CPU. The comparison against big.LITTLE, while obvious, isn’t a particularly good one. Foveros is designed to integrate with a huge range of products and in many different capacities, while big.LITTLE was a specific product implementation intended to reduce power consumption.

The final result is a hybrid x86 part that can switch between its Core and Atom CPUs, both of which are built on 10nm. The chip’s supporting logic is contained in the bottom die, while the CPU cores reside up top. Both AMD and Intel are moving to chiplets, but Intel seems to believe doing them in 3D will allow it to gain additional ground. This unnamed part should launch in 2019 as well.
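Intel hasn’t said how work would actually be divided between the two CPU types, but conceptually a part like this enables big.LITTLE-style scheduling policies. The sketch below is purely hypothetical; the function, thresholds, and load metric are illustrative and are not Intel’s design:

```python
# Conceptual sketch only: Intel has not described how the hybrid part schedules
# work. This illustrates the kind of policy a big/small x86 design enables, in
# the spirit of ARM's big.LITTLE: light or background work goes to the
# low-power (Atom-class) core, demanding work to the big (Core-class) core.
def pick_core(task_load, on_battery):
    """task_load: estimated CPU demand from 0.0 to 1.0 (hypothetical metric)."""
    if on_battery and task_load < 0.3:
        return "atom"    # low-power core: background sync, idle UI, etc.
    if task_load < 0.1:
        return "atom"
    return "core"        # big core: bursty or sustained heavy work

for load, batt in [(0.05, False), (0.25, True), (0.9, True)]:
    print(load, batt, "->", pick_core(load, batt))
```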

The combined capabilities of EMIB and Foveros give Intel significant reach and flexibility for wiring up hardware in new and interesting combinations. It’s no accident that we’re seeing AMD reaching for chiplets at the same time. More than three years ago, we wrote an article on Moore’s law, noting that the definition had changed over time as new problems presented themselves. Today, continuing with Moore’s law means continuing to improve scaling, integration, and power consumption. It’s an efficiency game. If Intel can actually start scaling 3D chip production, it could reshape how we design cores in the future.


Western Digital Announces Plans for Its Own RISC-V Processor



RISC-V hasn’t been a huge topic for us at ExtremeTech, but the fully open-source CPU instruction set architecture (ISA) has been building momentum in the industry over the past few years as more companies have signed on to build RISC-V-compatible processors. While it’s not the first open-source ISA, RISC-V is designed to be used in a wider range of devices than some of the previous work in this space. Now, Western Digital has announced that it intends to build its own RISC-V processor, in what could be a major breakthrough moment for the ISA as a whole.

RISC-V has been under development for years and is intended to be a practical ISA for CPU development rather than strictly an academic exercise. Wikipedia’s entry on the ISA is fairly good if you’re looking for an overview. According to Western Digital, it’s investing in its new CPU, SweRV, as part of its goal of shipping one billion RISC-V cores per year in its various storage products. WD will build the SweRV core; OmniXtend, an open standard initiative for cache-coherent memory over a network; and an open-source RISC-V instruction set simulator. The first two projects are intended to improve Western Digital’s own efforts in the storage market, while the third is useful to the RISC-V community more generally (in addition to WD itself, of course).
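The instruction set simulator is the easiest of the three to picture: at its simplest, an ISS just decodes instruction words and executes them against a model of the register file. Here is a toy sketch for two RV32I instructions (this is not WD’s simulator, just an illustration of the concept):

```python
# Toy sketch of what an RV32I instruction-set simulator does at its simplest
# (NOT Western Digital's simulator): decode and execute ADDI and ADD.
def sign_extend(value, bits):
    return value - (1 << bits) if value & (1 << (bits - 1)) else value

def step(regs, instr):
    opcode = instr & 0x7F
    rd     = (instr >> 7) & 0x1F
    rs1    = (instr >> 15) & 0x1F
    if opcode == 0x13:                      # OP-IMM (I-type), funct3=0 -> ADDI
        imm = sign_extend(instr >> 20, 12)
        regs[rd] = (regs[rs1] + imm) & 0xFFFFFFFF
    elif opcode == 0x33:                    # OP (R-type), funct3/funct7=0 -> ADD
        rs2 = (instr >> 20) & 0x1F
        regs[rd] = (regs[rs1] + regs[rs2]) & 0xFFFFFFFF
    regs[0] = 0                             # x0 is hard-wired to zero

regs = [0] * 32
program = [0x00500093,   # addi x1, x0, 5
           0x00700113,   # addi x2, x0, 7
           0x002081B3]   # add  x3, x1, x2
for instr in program:
    step(regs, instr)
print(regs[3])           # 12
```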


SweRV Core. Image credit: Anandtech.

“As Big Data and Fast Data continues to proliferate, purpose-built technologies are essential for unlocking the true value of data across today’s wide-ranging data-centric applications,” said Western Digital CTO Martin Fink in a statement. In a separate press release in late November, WD outlined some of its thinking. The company contends that the era of Big Data and massive data-processing workloads is less amenable than it once was to using general-purpose architectures to improve compute performance. Western Digital believes this one-size-fits-all approach to processing is less compatible with the demands of the modern era than what are sometimes called “domain specific architectures.”

RISC-V is Western Digital’s way of looking for a performance advantage with a CPU architecture it can customize to its own needs rather than attempting to fit into a general-purpose CPU. Of course, the kind of processing we’re referring to is fairly low level: the SweRV core is expected to be deployed in SSD and flash controllers, not a traditional CPU socket. The new CPU core is a 32-bit in-order design with a 2-way superscalar architecture and a nine-stage pipeline. Clocks are expected to reach up to 1.8GHz on a 28nm process node.
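Those figures set a modest but serviceable ceiling for an embedded controller core. A quick back-of-the-envelope calculation, assuming (optimistically) that the core could sustain its full issue width:

```python
# Rough upper bound on SweRV instruction throughput from the published figures:
# a 2-way superscalar in-order core at up to 1.8GHz can retire at most
# two instructions per cycle.
issue_width = 2
clock_ghz = 1.8
print(f"Peak: {issue_width * clock_ghz:.1f} billion instructions/second")  # 3.6
```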


50 Years Ago, ‘The Mother of All Demos’ Showed Us How Tech Would Transform the World



It can be easy to take for granted the way modern computers and mobile devices work. We can manage our social lives, watch video, work on documents, and more with a simple graphical interface. That was not the case in past decades, but the first hints of the future we now live in came earlier than you might expect. Fifty years ago, Doug Engelbart of the Stanford Research Institute (SRI) appeared on stage to give “the mother of all demos.” In the space of 90 minutes, he showed off revolutionary concepts like the mouse, word processing, and hyperlinks.

The demo took place on December 9, 1968, when the microprocessors that would drive the computer revolution didn’t even exist yet. Still, Engelbart and the team at SRI were already hard at work on a computer system for creating, managing, and linking files. The researchers had to build their own display in those days, which cost a whopping $90,000 in 1968. That’s the equivalent of about $650,000 today.

The demo at the Fall Joint Computer Conference in San Francisco’s Civic Auditorium was the “coming out party” for this technology, but it wasn’t just a demo. As Engelbart clarified, the technology he showed off was in use at SRI. Like most computers of the day, it relied on a central database connected to multiple terminals. At the time of the demo, SRI had six working terminals, with plans to add six more to help researchers get their work done more quickly and efficiently.

Over the course of 90 minutes, Engelbart explained how the programs developed at SRI could store and recall data in ways that past computers never could. Live, in front of a packed auditorium, he made lists, drew simple bitmap graphics, and at times used the system much like a PowerPoint presentation. The demo also included linked files, which we would now call hyperlinks; Engelbart called the feature “jump on a link.”