A8 SoC reverse engineered: Revised CPU, new quad-core GPU, TSMC’s 20nm process
Image: the Apple A8 SoC (iFixit)

Reverse-engineering firm Chipworks has published its report on the Apple A8 SoC, the first high-end chip built on TSMC’s 20nm process and the silicon powering the iPhone 6 and iPhone 6 Plus. The analysis reveals a design that’s a bit different from what we initially expected, and one that confirms some of the anticipated differences between TSMC’s 20nm node and Intel’s 14nm FinFET technology.

First, we now have some context for which parts of the chip have shrunk and which have stayed roughly the same size. The CPU cores themselves are far smaller, at 12.2mm² versus the A7’s 17.1mm², and may use per-core L2 caches rather than the shared L2 the A7 relied on. The CPU, as expected, is still dual-core, but Chipworks speculates that the L1 and L2 caches may be somewhat larger than before. If you’re curious about the function of caches in modern processors, we have a full primer on the topic that you should read.
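
For a concrete feel for why cache size and locality matter, here is a minimal sketch in Swift (our own illustration, not Chipworks’ methodology; the 4096x4096 matrix size is an arbitrary choice and is not tied to the A8’s actual cache dimensions). It traverses the same data in a cache-friendly and a cache-hostile order:

```swift
import Foundation

// Traversing the same matrix row-by-row vs. column-by-column does identical
// arithmetic, but the column-wise walk defeats spatial locality, so it ends up
// bound by cache misses and memory bandwidth instead of the CPU core itself.
let n = 4096                                    // 4096 x 4096 Ints ~ 128MB, far beyond any on-chip cache
var matrix = [Int](repeating: 1, count: n * n)
var sum = 0

func measure(_ label: String, _ body: () -> Void) {
    let start = Date()
    body()
    let ms = Date().timeIntervalSince(start) * 1000
    print("\(label): \(Int(ms)) ms")
}

measure("row-major (cache-friendly)") {
    for row in 0..<n {
        for col in 0..<n { sum &+= matrix[row * n + col] }
    }
}

measure("column-major (cache-hostile)") {
    for col in 0..<n {
        for row in 0..<n { sum &+= matrix[row * n + col] }
    }
}

print(sum)   // use the result so the loops aren't optimized away
```

On most hardware the column-major pass runs several times slower than the row-major one, which is exactly the penalty a larger, faster cache hierarchy is designed to blunt.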

Die images: the Apple A8 SoC and the Apple A7 SoC

The SRAM L3 cache, on the other hand, hasn’t shrunk by nearly as much (bearing in mind that the CPU cores have added caches and other circuitry of their own). The 4MB SRAM block has shrunk by perhaps 33% compared with the A7’s, which packs the same 4MB of L3. That’s solid scaling, but it remains well below what Intel reported for its own 22nm-to-14nm die shrink. It does, however, fit reports that SRAM scaling would slow across the board for TSMC and GlobalFoundries, which would be one reason why 20nm and below won’t offer the same density improvements as previous nodes. Obviously, the more SRAM a device contains, the greater the impact of this slowdown in scaling.
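
As a back-of-the-envelope check (our own arithmetic using the figures above, and assuming ideal linear scaling from a 28nm-class node, which no real process delivers), here is how the reported shrinks compare with a perfect 28nm-to-20nm shrink:

```swift
// An ideal shrink from a 28nm-class node to 20nm would scale area by (20/28)^2,
// i.e. to roughly half. The reported A8 figures fall well short of that ideal.
let idealFactor = (20.0 / 28.0) * (20.0 / 28.0)   // ~0.51, i.e. ~49% smaller
let cpuFactor   = 12.2 / 17.1                     // ~0.71, i.e. ~29% smaller (and that's with extra cache added)
let sramFactor  = 1.0 - 0.33                      // ~0.67, from the reported ~33% SRAM shrink

print("ideal:", idealFactor, "CPU:", cpuFactor, "SRAM:", sramFactor)
```

Neither the CPU block nor the SRAM comes close to the ideal ~0.51 factor, which is the scaling slowdown in a nutshell.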

Apple A8 SoC with the package-on-package (PoP) DRAM removed

The other major component of the SoC is the GPU, and here a number of predictions were off base. Defying expectations of a hefty six-core GPU, Apple has opted for a beefed-up quad-core part: instead of the hexa-core PowerVR GX6650, it has gone with the quad-core GX6450, the successor to last year’s G6430.


The exact benefits of the new GPU are unknown, but Ars Technica’s review points to substantial performance degradation when the iPhone 6 Plus is tested at its full screen resolution; the new GPU simply isn’t strong enough to keep up with the additional pixels. We saw something similar with iPads after Retina displays went mainstream. The A8 is clearly tuned for the iPhone 6’s sweet spot, and Apple’s claim of a 50% graphics boost over the A7 and iPhone 5S appears to be accurate.
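
The pixel arithmetic makes the problem clear. As a rough sketch (our own numbers, not Ars’ methodology): the 6 Plus renders most content at 2208x1242 before downsampling to its 1920x1080 panel, while the iPhone 6 draws at its native 1334x750.

```swift
import Foundation

// Per-frame pixel workload at each phone's standard render target.
let iPhone6Pixels     = 1334 * 750        // ~1.00 million pixels, native panel resolution
let iPhone6PlusPixels = 2208 * 1242       // ~2.74 million pixels, rendered then downsampled to 1920x1080

let extraLoad = Double(iPhone6PlusPixels) / Double(iPhone6Pixels)
print(String(format: "iPhone 6 Plus pushes roughly %.1fx the pixels per frame", extraLoad))
// ~2.7x the fill-rate work, for a GPU Apple rates at only ~1.5x the A7's graphics performance
```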

Chart: Apple A8 GPU performance, showing a modest improvement over the A7

Ars’ testing also reveals another facet of improvement: the A8 doesn’t throttle back as sharply as the A7 did, and the iPhone 6 Plus can hold its maximum clock speed for longer when the tests run long enough. Whether this matters to you will depend on how long you use the device and what you use it for, but the 20nm shift did buy Apple some modest gains in sustained clock speed under heavy load.
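
A simple way to observe this kind of behavior yourself is to run a fixed chunk of work repeatedly and watch whether later passes slow down. This is a minimal sketch, not Ars’ actual benchmark; the workload size, pass count, and pause are arbitrary illustrative values:

```swift
import Foundation

// If the SoC throttles under sustained load, identical passes take progressively
// longer as the chip drops its clocks to stay inside its thermal budget.
func fixedWorkload() -> Double {
    var x = 1.000001
    for _ in 0..<20_000_000 { x = x * 1.000001 + 0.000001 }   // arbitrary CPU-bound math
    return x
}

for pass in 1...30 {
    let start = Date()
    _ = fixedWorkload()
    let seconds = Date().timeIntervalSince(start)
    print(String(format: "pass %d: %.2f s", pass, seconds))
    Thread.sleep(forTimeInterval: 1)   // brief cool-down gap between passes
}
```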

Ultimately, it looks as though predictions that 20nm would be a relatively modest step forward from 28nm were entirely accurate. The A8’s CPU is modestly faster than the A7’s, but it’s the GPU in the iPhone 6 that really extends the lead (only to have it pulled back sharply by the iPhone 6 Plus’s increased pixel count).

If the A8 is an accurate bellwether, we’ll see similar modest-but-noticeable improvements from Qualcomm and the other SoC manufacturers as they adopt 20nm designs in 2015, with more significant gains coming in 2016 as 14/16nm FinFET transitions shape up.

Source: http://www.extremetech.com

 
