ARM Announces Neoverse Infrastructure for 5G, IoT, Edge Computing

This site may earn affiliate commissions from the links on this page. Terms of use.

A decade ago, ARM established the Cortex brand and an associated ecosystem of products stretching from tiny embedded cores suitable for ultra-low-power computing to high-end smartphone products. Over the last ten years, the Cortex family has grown tremendously, transforming ARM from one mobile CPU designer among a number of companies (albeit one of the largest in the space, with MIPS in second place) to the overwhelming market leader. ARM’s mindshare has grown at the same time, and while the company doesn’t enjoy the same brand recognition as Intel at the height of the PC era, there’s no doubt that companies now hype the advantages of new ARM chips much more than they did in years past. Now, ARM wants to take that expertise and apply it to a new infrastructure push — this time, focusing on the IoT and edge computing rather than on smartphones.


The goal of this new push is to create a support infrastructure capable of handling the sheer number of IoT devices expected to come online in the next few years. For a loose analogy, consider your Wi-Fi router (if you even had one) circa 2006. Most of the devices connected to said router would have been PCs. I’m not ruling out the occasional phone, but most people didn’t have smartphones in 2006, and there wasn’t much point in connecting a feature phone to Wi-Fi even if the feature phones of the day had supported it.

Today, your router likely supports a mélange of tablets, phones, and PCs — along with any game console(s), televisions, smart speakers, and other home appliances. In the Home of the Future™, manufacturers are bracing for a 10x jump in devices, if not more. Meanwhile, the advent of 5G is expected to cause the number of base stations to explode due to range limitations and line-of-sight restrictions. The end result? A trillion or more devices online over the long term that don’t even exist today, from major purchases to the most trivial gadgets.

ARM intends to use the same IP blocks in its embedded products that it uses in its smartphone designs, though these parts may carry additional customized I/O blocks and other features that wouldn’t normally be found in a smartphone. At the heart of the platform are the same CPU designs that power its Cortex chips, with the Cosmos platform available today and future products based on node shrinks and additional efficiency improvements. Overall, ARM claims it will deliver 30 percent generation-on-generation performance improvements, from the Cortex-A72 and A75-based parts of today through to the Poseidon core, built on 5nm and expected to arrive in 2021.
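If that 30 percent claim compounds across each generation, the cumulative gain is easy to sketch. Note the step count below (three generational steps between today's Cosmos platform and 2021's Poseidon) is our assumption for illustration, not ARM's figure:

```python
# Compound effect of ARM's claimed 30 percent generation-on-generation
# performance improvement. The step count (three generations from Cosmos
# to Poseidon) is an assumption for illustration, not ARM's own figure.
def compound_gain(per_gen_gain: float, generations: int) -> float:
    """Cumulative speedup after compounding `generations` steps."""
    return (1 + per_gen_gain) ** generations

print(f"{compound_gain(0.30, 3):.2f}x")  # prints "2.20x"
```

In other words, if the claim holds, a Poseidon-class core would land at a bit over twice the performance of today's parts.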

There will also be ecosystem support for capabilities like FPGAs and machine learning co-processors, with support for PCIe, CCIX, and 100G+ Ethernet over time; CPUs that scale from simple quad-cores all the way up to 128 “big” cores; up to 128MB of system cache; 1TB/s of bandwidth; and eight DDR4 channels with support for up to eight HBM stacks. ARM’s foundry partners for the initiative include TSMC, Samsung, and UMC for now, though other foundry partners could be added if they move to appropriate nodes (only three firms are currently building on the leading edge, but it’s possible that other foundries will opt to deploy 7nm technology at a later date, as has been the case with 14nm).

The overarching goal is to duplicate ARM’s success in mobile and create a cloud ecosystem capable of matching the kind of infrastructure that only a few companies on Earth really sell. Make no mistake — this is a major shot across the bow of companies like Intel, even if the two firms only compete in some of the areas that the Neoverse will touch. ARM is making a major ecosystem play here. If the company can duplicate the success it had in mobile, it’ll have an incredibly strong position across multiple core markets.

Now Read: Samsung, ARM Expand Collaboration at 5nm, World’s Largest ARM-Based Computer Launches, and Japan Tests Silicon for Exascale Computing


TSMC Announces First EUV 7nm Risk Production, 5nm Tapeouts in Q2 2019


The foundry business was shaken earlier this year when GlobalFoundries announced it would leave the leading edge and no longer planned to offer a 7nm process. TSMC, on the other hand, is eager to tell its customers that new node ramps, including the introduction of uncertain technologies like EUV, are proceeding on schedule.

As EETAsia discusses, the firm has now taped out its first 7nm design to use EUV (Extreme Ultraviolet Lithography). EUV’s introduction has been nearly two decades in the making and the technology faces continued challenges as it ramps to lower nodes, but the way seems clear for its early insertion into manufacturing at this stage.

TSMC is also looking to offer new packaging options to help chip firms design faster products. As nodes shrink, wire resistance has become an increasingly dominant reason why clocks can’t be scaled up more effectively, and some packaging alternatives offer theoretical improvements on this front. It’s an example of how foundries must now incorporate technology that touches multiple aspects of the design and manufacturing process in order to keep delivering improvements, rather than counting on lithography scaling for the usual year-on-year performance jumps. The overall ramp of EUV is expected to be slow.
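The wire-resistance point follows from basic geometry: resistance is resistivity times length over cross-sectional area, so shrinking a wire's width and height hurts faster than shrinking its length helps. A minimal sketch, with dimensions chosen purely for illustration (real interconnect scaling is further complicated by electron scattering at small dimensions, which this ignores):

```python
# R = rho * L / A. If a wire's width, height, and length all shrink by
# the same factor s, area shrinks by s^2 but length only by s, so
# resistance grows by 1/s. Dimensions below are illustrative only.
def wire_resistance(rho: float, length: float, width: float, height: float) -> float:
    """Resistance in ohms of a rectangular wire (SI units throughout)."""
    return rho * length / (width * height)

RHO_CU = 1.68e-8  # copper resistivity, ohm-metres
r_old = wire_resistance(RHO_CU, 100e-6, 100e-9, 100e-9)
r_new = wire_resistance(RHO_CU, 70e-6, 70e-9, 70e-9)  # every dimension x0.7
print(f"{r_new / r_old:.2f}x")  # prints "1.43x": higher R despite a shorter wire
```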


The announcement specifies that TSMC taped out an EUV design that can use the technology on up to four layers, while the 5nm node will be capable of deploying EUV on up to 14 layers. One of the challenges to deploying EUV at 5nm is the current lack of a pellicle solution (a pellicle is a protective, transparent shield that prevents debris from falling on the photomask); this design presumably uses EUV for contacts and vias, which don’t require a pellicle. The problem with pellicles as they relate to EUV is that it’s very difficult to make one that’s transparent to EUV light (extreme ultraviolet light is absorbed by virtually everything, including ambient air, which is why EUV tools have to operate in near-vacuum conditions).

The 7nm process TSMC is shipping now is based on conventional lithography rather than EUV. EUV isn’t really expected to introduce anything new as far as performance or power is concerned; its chief benefit is making chips cheaper for foundries to build and enabling further node scaling in the future. The 5nm node is predicted to offer a 15 percent performance improvement or a 20 percent power reduction (but not both) with an overall 45 percent reduction in area. It’s not clear what will come after 5nm — manufacturers have sketched out paths to lower nodes, but with EUV still held up over pellicle solutions and the continued departure of foundries from the leading edge (we’re down to Samsung, Intel, and TSMC), it’s not clear how much runway remains for companies to continue advancing the collection of technologies we collectively refer to as “Moore’s law.”
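As a rough illustration of what the quoted area figure implies: a 45 percent area reduction means the same logic fits in 55 percent of the space. This back-of-the-envelope arithmetic deliberately ignores yield and the higher wafer cost of a new node:

```python
# What TSMC's quoted 45 percent area reduction implies for density and
# die size, ignoring yield and wafer-cost differences (a big simplification).
area_reduction = 0.45
density_gain = 1 / (1 - area_reduction)      # same logic in less space
relative_die_area = 1 - area_reduction       # same design, smaller die

print(f"{density_gain:.2f}x transistor density")   # prints "1.82x transistor density"
print(f"{relative_die_area:.2f}x die area")        # prints "0.55x die area"
```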

If TSMC is able to enter risk production in Q2 2019, we could see chips in-market 12-15 months later, in 2020. That would put TSMC’s 5nm node with EUV up against Intel’s conventional lithography at 10nm, assuming Intel’s 10nm is indeed in-market by that point. AMD’s decision to move its products to TSMC means that the two firms are in direct competition again, though AMD has made no announcements about any 5nm migration and could skip the node if it turns out to be more of a mobile product, in much the same way that it skipped 10nm.

Now Read: TSMC Coming Back Online After Major Virus Issues, Intel Reportedly Won’t Deploy EUV Lithography Until 2021, and EUV Integration at 5nm Still Risky, With Major Problems to Solve


Windows 10 October 2018 Update Has Dropped


At its Surface event on Oct. 2, Microsoft announced that its latest Windows 10 update would be available immediately. Microsoft hasn’t put as much of a push behind publicizing the features baked into the latest update as it has in previous cycles, so what, exactly, is under the hood for Redstone 5?

Quite a few things, it turns out, though most of them fall under the category of minor improvements. There’s a new dark theme for Explorer if Dark Mode is enabled (shown below), live folders can be renamed in Start, and a new “Safe removal” feature will tell you if any current applications are running on an external GPU. Yanking an external GPU while it’s actually running an application is generally a bad idea — best case, the app crashes. Worst case, the entire system crashes. Notifications like this should help prevent this kind of problem.


There’s a new Windows screenshot tool (accessed via Win+Shift+S), and copied content can now be sent to a cloud-enhanced version of the Clipboard that syncs between devices (accessed via Win+V). Wireless displays are now supported in three different modes (game, productivity, and video), and the Microsoft Game Bar has supposedly been redesigned with new features that will offer additional performance information. There’s little word on actual performance enhancements, but this isn’t all that surprising — Windows 10’s game performance has already been optimized, and if there were much in the way of low-hanging fruit to clear out of the way, Microsoft would have incorporated it as a general feature of the OS rather than a mode attached to the company’s Game Bar capability.

Storage Sense can now move files into online-only mode if they haven’t been accessed locally, saving space; pen users can ink directly into text boxes; and here’s an interesting tidbit: “Users can now view the real world when using Windows Mixed Reality using a headset’s built-in camera.” Granted, Mixed Reality headsets haven’t set the sales world on fire, but that kind of integrated camera functionality could make VR headsets easier to use one day. At the very least, it might lead to fewer stepped-on cats.

Another welcome change, if it works? Windows Update is now supposed to use machine learning to determine when to install updates (and when not to). We’ll see how effective this actually is; I haven’t been enamored of Microsoft’s previous efforts to decide when rebooting a machine is a good idea.

Windows Central has a full writeup on these improvements and others, including various minor enhancements to Edge (a “Show in Folder” option has finally been added to the browser’s download tab). When a tab is playing audio, it’ll now light up when you hover over it — and tabs can be preemptively muted from a context menu before they start playing audio.

Collectively, there don’t seem to be many killer features in this update. But there are a lot of small, quality-of-life improvements across various applications and capabilities. Find/Replace in Notepad, for example, now supports wrap-around functionality, a feature the app has literally never had before. Unfortunately, you can also now “Search with Bing!” in Notepad, a feature I’m fairly certain nobody who uses Notepad in the Year of Our Lord 2018 has ever requested.

Ah well. Nothing’s perfect.

Now Read: Windows May Be Storing All Your Email and Docs as Unencrypted Plaintext, Microsoft Backs Down, Won’t Warn Users Away From Using Chrome, Firefox, and Next Windows 10 Update Will Auto-Move Files to OneDrive to Free Space


AMD May Regain 30 Percent Desktop Market Share By Q4 2018


We’ve speculated that Intel’s CPU shortage could be very good news for AMD, and it seems some industry sources share that opinion. There are predictions that AMD could pick up as much as 30 percent of the desktop market by the end of Q4 2018 as Intel’s woes continue.

This prediction comes courtesy of DigiTimes, which has a somewhat scattered track record where these things are concerned. Supposedly this rebound is due in part to AMD moving foundry production to TSMC, which makes little sense — AMD’s 7nm chips won’t have launched by Q4. The only way the move could affect AMD’s market share without 7nm parts available would be if OEMs viewed AMD’s foundry shift as evidence that the company will be more competitive in the future and were therefore more willing to move orders to the smaller CPU manufacturer. The site writes:

Desktop and motherboard vendors including Asustek Computer, Micro-Star International (MSI), Gigabyte Technology and ASRock have ramped up production and shipments of devices fitted with AMD processors, driving up the chipmaker’s share of the desktop processor market to over 20% in the third quarter. The company is very likely to see the figure further rebound to the level of 30% again.

On the other hand, if AMD is going to see significant gains, this is where we’d expect them to happen. As we recently reviewed, AMD’s strongest position across the entire PC industry is in desktops. Its Ryzen Mobile chips have begun to take some market share, but the last figures we saw for AMD in this space put it under 5 percent of the mobile market, with no real presence in slate PCs.


AMD channel market share, based on data from a European retailer

The desktop market share gains are great for AMD — any pickup is good for the company — but ultimately they won’t be sufficient to achieve the company’s long-term goals. Currently, desktops only account for roughly 20 percent of the PC space. AMD moving from, say, 15 percent to 30 percent of the desktop market would mean moving from ~3 percent of the overall consumer market to ~6 percent. That doesn’t sound like much, but appearances can be deceiving. In Q2 2018, AMD reported Computing and Graphics revenue of $1.09B, up 1.64x year-on-year, driven primarily by Ryzen sales. It’s important to keep this in mind when evaluating Ryzen and AMD’s overall performance — while the company may not pick up huge amounts of share in absolute terms, its market share had shrunk to the point that even small gains can yield significant financial upside.
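The share arithmetic above is worth making explicit. Using the article's rough figures (desktops as ~20 percent of the PC space):

```python
# Back-of-the-envelope version of the share math: desktops are roughly
# 20 percent of the PC market, so doubling desktop share from 15 to 30
# percent roughly doubles AMD's slice of the overall market.
desktop_fraction_of_pcs = 0.20
overall_before = desktop_fraction_of_pcs * 0.15  # 15% desktop share
overall_after = desktop_fraction_of_pcs * 0.30   # 30% desktop share

print(f"{overall_before:.0%} -> {overall_after:.0%}")  # prints "3% -> 6%"
```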

DigiTimes is predicting 5 percent server market share for AMD by the end of Q4, which fits with Lisa Su’s own mid-single-digit projections for the year and even Intel’s comments on the topic. As we swing towards the end of 2018, there’s no arguing that AMD is in a much stronger position than it’s occupied for at least a decade. The big question now is whether the company can follow up effectively at 7nm. With no Intel 10nm hardware expected until Q4 2019, AMD will have a first-mover advantage and opportunity to demonstrate how effectively it can match or exceed the performance of Intel’s Coffee Lake with its own Ryzen 2.

Now Read: If Intel Is Suffering a CPU Shortage, Can AMD Pick Up the Slack?, AMD Lists New, Higher-Power Ryzen Mobile CPUs, and PC Market Could Shrink 5-7 Percent in Q4 2018 Thanks to Intel CPU Shortage


Chrome 69 Is a Full-Fledged Assault on User Privacy


Maybe Microsoft had a point.

Eleven days ago, we excoriated Microsoft for its now-scuttled plan to add “warnings” to Windows 10 that would nudge users away from using Chrome and Firefox and towards Microsoft’s own browser, Edge. After ferocious outcry, Redmond backed away from this plan, rightly perceiving the issue as a bridge too far when it comes to spreading FUD about its competitors in an attempt to boost its browser’s market share. But Google’s most recent behavior with Chrome 69 isn’t doing it any favors, either, and the company has adopted some new approaches that blur the line between being logged into Chrome and not, overriding previous user settings in the process. The company’s explanation for these behaviors, furthermore, does not hold water.

Let’s start at the beginning. Prior to Chrome 69, Chrome offered an optional sign-in feature. This feature had nothing to do with your various accounts on services like Gmail or YouTube — instead, it allowed Google to synchronize things like cookies and bookmarks across all of the devices on which you used Chrome services. Many people embraced the feature, but Google kept it opt-in. The old login icon looked like a blank outline of a person. When clicked, it displayed the following message:


But now, Google has changed this message. Download and install Chrome 69, and the browser now treats this sign-in option as exercised if you log into any Google account. In other words, Google now treats the Chrome sign-in and the Google account sign-in as equivalent.

There was no reason to make this change. The stated rationale, as expressed by Google engineer and manager Adrienne Porter Felt, is as follows (thread linked below, but we’ll summarize):

This makes superficial sense. The idea is that people thought they were signing out of Chrome when they were actually signing out of a content area. When devices are shared, this could lead to cross-cookie contamination (someone else’s cookies and preferences being loaded instead of your own). And sure, that’s a problem. But as cryptographer and professor Matthew Green points out, it’s only a problem for people who sign into Chrome in the first place. If you don’t sign into Chrome, Google’s “fix” didn’t fix anything for you. It broke things. It’s leading to confusion precisely because Google no longer differentiates whether you’re signed into the browser or not. Now, when you sign into Chrome (because now you’re forced to sign into Chrome), you see a new menu in which it isn’t clear what the big blue “Sync as Matthew” button even does. Does it mean you’re already synced, or is it inviting you to initiate a sync?


Image by Matthew Green

These changes are all part of what’s known as a dark pattern. If a pattern is defined as a regularity in the world (designed or naturally occurring) that repeats in a predictable manner, a dark pattern is an attempt to trick users by designing interface options that look like the options users expect to see. The following is an example of a dark pattern from Google’s privacy settings that we covered back in 2016:


Notice how the boxes work. The information in the Photos, YouTube / Videos, +1, and Reviews tabs is shared with others if you put a check in those boxes and kept private if you remove it. But if you remove the check from the “Photos and Videos” section, you give Google permission to share that information. If you want your Google Plus profile to be maximally private, you need to remove all of the checkboxes from the first set of options and put a check in the Photos and Videos option.

First, the company trains you to expect the UI to act a certain way, then it changes the actions of the UI mid-stride so you pick the action it wants you to choose rather than your actual intended result.

As Green writes:

Google has transformed the question of consenting to data upload from something affirmative that I actually had to put effort into — entering my Google credentials and signing into Chrome — into something I can now do with a single accidental click. This is a dark pattern. Whether intentional or not, it has the effect of making it easy for people to activate sync without knowing it, or to think they’re already syncing and thus there’s no additional cost to increasing Google’s access to their data.

It’s not clear if clicking “Sync” is all you need to do or not. Some have seen the Sync feature fully activate from clicking it once, but two-factor authentication may have been involved in that step.

But this kind of pattern deployment is fundamentally toxic to trust. It’s particularly toxic for a company that’s proven so willing to end-run around user expectations, including promising two years ago not to track users who turned off location tracking, only to later admit that hey, it’s still tracking users who turn off location tracking. Google has also acknowledged allowing third parties to sweep Gmail for data.

On a personal note, it’s deeply unsurprising to see Google do this. Green points out that Google is promising to respect a user’s sync settings after deliberately breaking the conventions end users were relying on to tell Google they didn’t wish to sync their data across devices. But this is unsurprising. It’s exactly what Google did years ago with its own opt-out system for automatic updates. The company establishes a mechanism by which users can opt out of something, then breaks that mechanism if too many people opt out of it. We’re supposed to trust that Google will respect the decision of people who don’t want to sync their data with its servers when it just broke the mechanism by which people previously notified it of exactly that? Muddying the waters with a login that isn’t a login and a “Sync” panel that can seamlessly activate a feature users don’t want isn’t an improvement — it’s just as scummy as the games Microsoft played with its Windows 10 update tool near the official end of the free Windows 10 rollout period.

This kind of behavior is profoundly damaging to any conception of trust. Combine it with the endless privacy scandals coming out of Google and the company’s willingness to help the Chinese government spy on its own people, and it’s worth asking why we respect this company at all.

Now Read: Google’s Chinese Search Engine Reportedly Links Results to Phone Numbers, Google Confirms It Still Tracks Users Who Disable Location Tracking, and Microsoft Backs Down, Won’t Warn Users Away From Using Chrome, Firefox


Nvidia Announces New Tesla T4 GPUs For Data Center Inferencing


With Turing fast approaching for consumer cards, Nvidia is bringing new GPUs to market for the data center and HPC universe as well. Last week, the company announced its new T4 GPU family, specifically intended for AI inference workloads and taking over for the Tesla P4 in this role.

Nvidia claims the new GPU is up to 12x more power-efficient than its Pascal predecessor. The company has released a suite of benchmark tests showing the T4 blasting past its competition, though as always, such vendor-supplied results should be taken with a grain of salt. We’ve seen Intel release test results claiming its own Xeon processors are excellent at inference, for example; the degree to which that’s true is likely a function of optimization flags and the specific test configurations or scenarios.


Specs on the new T4 are impressive. 16GB of GDDR6 feeds a cluster of 2,560 CUDA cores and 320 Turing Tensor cores, all within a svelte 75W power profile. THG reports that the Tesla T4 has an INT4 and even an experimental INT1 mode, with up to 65 TFLOPS of FP16, 130 TOPS of INT8, and 260 TOPS of INT4 performance on tap. The older P4, in contrast, offers 5.5 TFLOPS of FP16 and 22 TOPS of INT8. Nvidia says there are optimizations for AI video applications as well, plus a beefed-up decoder that can handle up to 38 HD video streams simultaneously.
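The quoted peak numbers follow a simple pattern: each time operand precision halves, throughput doubles (65 at FP16, 130 at INT8, 260 at INT4). A sketch of that ideal scaling; these are peak marketing figures, not sustained rates:

```python
# Ideal 2x throughput scaling per precision halving, anchored to the
# T4's quoted 65 TFLOPS FP16 baseline. Sustained rates will be lower.
def peak_throughput(base_fp16: float, bits: int) -> float:
    """Peak throughput (same units as base_fp16) at `bits` precision."""
    return base_fp16 * (16 / bits)

for bits in (16, 8, 4):
    print(f"{bits}-bit: {peak_throughput(65, bits):.0f}")
# prints:
# 16-bit: 65
# 8-bit: 130
# 4-bit: 260
```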

Alongside the new T4, Nvidia is also launching new development tools, including a refresh of its TensorRT software package and a new, Turing-optimized version of CUDA (CUDA 10) that includes libraries and models optimized for the architecture. Watch for the battle over inferencing to heat up in 2019. Intel is pushing into the space with Xeon, AMD wants a piece of the pie for its Radeon Instinct line-up of machine learning accelerators, Nvidia’s Turing is going to play in this space, and then, come 2020, Intel will have GPU architectures of its own to compete with. And that’s before you consider the custom accelerators that everyone from Fujitsu to Google has been building and deploying.

It’s not always clear how these technologies will impact consumers; Nvidia’s push to introduce ray tracing and DLSS is the most prominent example we have so far of a company taking the designs it built for HPC and bringing them over to the consumer space. We don’t yet know if it’ll work. But there’s clearly a multi-way fight brewing between the largest titans of the industry — and Nvidia wants to take an early leadership position with its new line of GPUs.

Now Read: Nvidia AI Erases Noise From Images, Nvidia Unveils Turing GPU Architecture, and Tesla Dumps Nvidia, Goes it Alone


F-Secure Says Almost All Computers Are Vulnerable to New Cold Boot Attack


Look at that laptop over there, lid closed and sleeping soundly. It looks safe and secure, doesn’t it? Well, there’s a good chance that it’s vulnerable to a cold boot attack that could compromise your data. According to security firm F-Secure, almost every computer is vulnerable to this type of attack.

At the heart of this attack is the way computers manage RAM via firmware. Cold boot attacks aren’t new — the first ones came along in 2008. Back then, security researchers realized you could hard-reboot a machine and siphon off a bit of data from the RAM. This could include sensitive information like encryption keys and personal documents that were open before the device rebooted. In the years since, computers have been hardened against this kind of attack by ensuring RAM is cleared quickly, for example by overwriting the contents of RAM when power is restored to a powered-down machine.

The new attack can get around those safeguards because the target machine isn’t off — it’s just asleep. F-Secure’s Olle Segerdahl and Pasi Saarinen found a way to rewrite the non-volatile memory chip that contains the security settings, thus disabling memory overwriting. After that, the attacker can boot from an external device and read the contents of the system’s RAM from before the device went to sleep.

You can see the process in the video below. It’s obviously quite involved, but an experienced attacker could get it done in a matter of minutes. F-Secure’s description of the attack seems intentionally vague on how exactly you modify the firmware security, but we are assured it’s “simple.” Perhaps the one saving grace here is that someone needs to have physical access to your computer and enough time to take it apart in order to steal any data. Some computers aren’t very easy to disassemble these days, either.