Linley Newsletter: March 29, 2018



Linley Newsletter

(Formerly Processor Watch, Linley Wire, and Linley on Mobile)

Please feel free to forward this to your colleagues

Issue #593

March 29, 2018


Independent Analysis of Microprocessors and the Semiconductor Industry

Editor: Tom R. Halfhill

Contributors: Linley Gwennap, Mike Demler, Bob Wheeler


In This Issue:

- IC Economics Limit Performance

- Marvell Doubles PAM4 PHY Density

- Nvidia Shares Its Deep Learning

- NXP Pushes i.MX8M Mini to 14nm

- NXP Refits i.MX and Kinetis for IoT


Linley Processor Conference: Last Chance for Free Registration!

Free on-line registration ends at 5 p.m. PDT on April 5, 2018

On April 11-12, 2018, The Linley Group will host our Linley Spring Processor Conference at the Hyatt Regency Hotel in Santa Clara, California. This in-depth technical conference is the microprocessor industry's premier event, covering the latest processors, memory, and IP cores used in deep learning, embedded, communications, automotive, IoT, and server designs. The program includes more than 20 technical presentations by experts from the companies leading the industry, including these keynotes:

Day One, April 11:

"How Well Does Your Processor Support AI?"

Linley Gwennap, Principal Analyst, The Linley Group

Day Two, April 12:

"Big Data and Edge Processing Trends Point to RISC-V"

Martin Fink, EVP and CTO, Western Digital

Sponsor exhibits and reception:

April 11, 4:40 p.m. to 6:10 p.m.

This is a great networking opportunity, with sponsor exhibits, a raffle from The Linley Group, heavy hors d'oeuvres, and an open bar.

Register now!

http://www.linleygroup.com/events/register.php?num=44

Admission is free to pre-qualified attendees who register on-line by April 5. Qualified attendees include system designers, chip designers, software designers, OEM/ODMs, service providers, press, and the financial community. The fee for other attendees is $795 for registrations received on or before April 5.

To see the full program and details, visit our web site: http://www.linleygroup.com/SPC18

Thanks to our sponsors: Synopsys, Arm, Micron, Rambus, GlobalFoundries, NetSpeed, ArterisIP, Cadence, CEVA, Inside Secure, SiFive, UltraSoc, Videantis, Wave Computing, and MIPI Alliance.


IC Economics Limit Performance

By Linley Gwennap

Remember when transistors were free? In the heyday of Moore's Law, chip designers counted on each new process node to double their transistor budget. More recently, that bounty faded as rising wafer-processing complexity all but eliminated transistor-cost gains. Designers must therefore scale back their next-generation designs to fit the cost constraints of their target markets. This trend is already slowing progress in processor performance and features, especially in cost-focused markets.

PCs are a good example. In a mature market with price pressure from end customers, Intel has hunkered down for the past several years, continuing to offer quad-core processors with modest CPU-architecture improvements. Last year, Coffee Lake bucked the trend with a six-core configuration, but even including that design, we estimate Intel's relative die area (normalized for process technology) is rising less than 10% per year, causing the actual die area to decrease over time. A similar trend appears in midrange smartphone processors, which have been stuck at eight Cortex-A53 CPUs for the past few years.
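The arithmetic behind the shrinking-die observation can be sketched as follows. The rates here are illustrative assumptions, not figures from the article: density is assumed to roughly double per node on a roughly 2.5-year cadence, while normalized die area grows 10% per year.

```python
# Back-of-the-envelope sketch: if normalized (transistor-count) die area
# grows ~10% per year while transistor density roughly doubles with each
# process node, physical die area shrinks. All rates are assumptions.

area_growth_per_year = 1.10          # assumed normalized die-area growth
density_gain_per_node = 2.0          # assumed density scaling per node
years_per_node = 2.5                 # assumed node cadence
density_growth_per_year = density_gain_per_node ** (1 / years_per_node)

physical_area_change = area_growth_per_year / density_growth_per_year
print(f"Physical die area changes ~{(physical_area_change - 1) * 100:+.1f}% per year")
```

Under these assumptions, density gains (about 32% per year) outpace the 10% transistor-budget growth, so the physical die shrinks by roughly 17% per year, which is consistent with the trend the article describes.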

For companies lucky enough to have high-margin products or customers desperate to pay for performance increases, the right strategy is still to stay on the leading-edge node and cram more and more transistors into their designs. High-end smartphone processors, for example, have been among the first products to jump to new process nodes. Similarly, Nvidia and Intel continue to pack more cores into their high-end data-center products. But most chip vendors are more judicious in adding new transistor-hungry features. One result of this slower growth is an ongoing decline in die sizes.

Microprocessor Report subscribers can access the full article:

http://www.linleygroup.com/mpr/article.php?id=11950

Marvell Doubles PAM4 PHY Density

By Bob Wheeler

Marvell is sampling its first 50Gbps PAM4 PHY, which doubles the port density of Broadcom and Inphi chips. Targeting systems rather than optical modules, the 88X7120 arrives at the same time as the first switch chips to support the new 50Gbps, 200Gbps, and 400Gbps Ethernet standards. It handles 16x50GbE ports or 2x400GbE ports as well as intermediate speeds. The design is also versatile, offering retiming as well as multiple gearbox and reverse-gearbox modes for both PAM4 and NRZ interfaces.

The 88X7120 implements 16 bidirectional channels on both the host and line sides. Like Marvell's 25Gbps retimers, it provides long-reach serdes on the host side in addition to the line side. These serdes comply with the 802.3cd 50GBase-CR/KR specifications for PAM4 operation, and they support an NRZ mode compliant with 100GBase-CR4/KR4. The company claims the 7120 meets the 802.3cd standard for single-lane 50Gbps Ethernet. The chip also handles 802.3bs specifications for 100Gbps, 200Gbps, and 400Gbps Ethernet.

Between the host- and line-side interfaces, the 7120 includes a pair of PCS+FEC (physical coding sublayer + forward error correction) blocks for gearboxing between PAM4- and NRZ-based standards. It offers both 2:1 host-to-line gearboxing and 1:2 reverse gearboxing as well as 16:16 retiming, which handles 2x400GbE, 4x200GbE, 8x100GbE, and 16x50GbE ports. It provides a repeater mode that bypasses the PCS and FEC functions to minimize latency. The chip includes an MDIO control interface, eye monitoring for all serdes channels, and clock recovery for SyncE.
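The 16:16 retiming configurations above all follow from the same lane arithmetic: 16 line-side lanes at 50Gbps each, ganged 1, 2, 4, or 8 lanes per port. This sketch just enumerates that math; the lane counts are inferred from the article and the code is illustrative, not Marvell's API.

```python
# Hypothetical sketch of the 88X7120's retiming port math: 16 lanes of
# 50Gbps PAM4, grouped into ports of 1, 2, 4, or 8 lanes. Lane counts
# are inferred from the article; this is not vendor software.

LANES = 16
LANE_RATE_GBPS = 50

for lanes_per_port in (1, 2, 4, 8):
    ports = LANES // lanes_per_port
    speed_gbps = lanes_per_port * LANE_RATE_GBPS
    print(f"{ports} x {speed_gbps}GbE ports")
```

Running it reproduces the four configurations the chip supports: 16x50GbE, 8x100GbE, 4x200GbE, and 2x400GbE.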

Although Marvell trails Broadcom and Inphi in bringing PAM4 PHYs to market, those vendors' first-generation designs went into optical modules rather than onto line-card and switch PCBs. The 88X7120 is sampling at the same time as the first switch chips integrating PAM4 serdes, which will create both retiming and reverse-gearbox opportunities. Given its leading density, OEMs should evaluate this new alternative.

Microprocessor Report subscribers can access the full article:

http://www.linleygroup.com/mpr/article.php?id=11949

Nvidia Shares Its Deep Learning

By Mike Demler

Designers no longer need to worry about the costs of deep-learning acceleration: Nvidia is making the technology available for free. The company has extracted the deep-learning accelerator (NVDLA) from its Xavier autonomous-driving processor and is offering it for use under a royalty-free open-source license. It's managing the NVDLA project as a directed community, which it supports with comprehensive documentation and instructions. Users can also download NVDLA hardware and software components from GitHub. Nvidia delivers the NVDLA core as synthesizable Verilog RTL code, along with a step-by-step SoC-integrator manual, a run-time engine, and a software manual.

The company's strategy in creating the open-source project is to foster more-widespread adoption of neural-network inference engines. It expects to thereby benefit from greater demand for its expensive GPU-based training platforms. Most neural-network developers train their models on Nvidia GPUs, and many use the CUDA deep-neural-network (cuDNN) library and software-development kit (SDK) to run models built in Caffe2, PyTorch, TensorFlow, and other popular frameworks.

The NVDLA is configurable for uses ranging from tiny IoT devices to image-processing inference engines in self-driving cars, but Nvidia's first RTL release is the "full" model, which is similar to the unit in Xavier. It includes 2,048 INT8 multiply-accumulators (MACs), but they're configurable at run time as 1,024 INT16 or FP16 units. In a 16nm design optimized to run the ResNet-50 neural network, the full model processes 269 frames per second (fps) and consumes 291mW on average.

Next quarter, the company plans to offer early access to a small NVDLA version that integrates 64 fixed-configuration INT8 MACs. This design can process 7fps on ResNet-50, but it consumes just 17mW (average). These full and small models are just two end-point examples of the accelerator's configurability, and designers are free to fine-tune the architecture.
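The power and throughput figures above imply an energy-per-frame comparison between the two endpoints. The fps and milliwatt numbers come from the article; the mJ-per-frame division is our own back-of-the-envelope calculation, not Nvidia's.

```python
# Energy per ResNet-50 frame for the two NVDLA endpoint configurations.
# fps and average power are the article's figures; mJ/frame is derived:
# mW divided by frames/s yields mJ per frame.

configs = {
    "full (2,048 INT8 MACs)": {"fps": 269, "power_mw": 291},
    "small (64 INT8 MACs)":   {"fps": 7,   "power_mw": 17},
}

for name, c in configs.items():
    mj_per_frame = c["power_mw"] / c["fps"]
    print(f"{name}: ~{mj_per_frame:.2f} mJ per frame")
```

Interestingly, the full model is more energy efficient per frame (about 1.1mJ versus roughly 2.4mJ for the small model), even though the small model's absolute power is far lower.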

Microprocessor Report subscribers can access the full article:

http://www.linleygroup.com/mpr/article.php?id=11953

NXP Pushes i.MX8M Mini to 14nm

By Tom R. Halfhill

NXP is expanding its i.MX media-processor family with lower-cost i.MX8M Mini processors. Some of these newcomers will supersede existing i.MX6 products that are popular in consumer electronics and other embedded systems. These are also the company's first chips manufactured in Samsung's 14nm FinFET technology.

Little wonder that NXP is devoting more attention to i.MX. For the i.MX6 family alone, it claims more than 3,500 customers (including those won through distribution). Responding to demand for lower-cost products, the company recently announced the i.MX8M Mini lineup. Sporting one to four ARM Cortex-A53 CPUs plus a Cortex-M4F microcontroller core, these processors are scheduled to sample in 2Q18 and reach production by the end of the year. They also integrate GPU cores for 2D and 3D graphics, and some have video acceleration.

The i.MX processors integrate a smorgasbord of features to address a variety of applications. Even these economical Minis are well appointed. With their GPUs and Cortex-M4F coprocessor -- which can serve as a sensor hub -- they can enable a human/machine interface (HMI) on nearly any commercial, industrial, or consumer device. The Mini models with a video engine can encode and decode video streams for videoconferencing, surveillance cameras, and machine-vision inspection systems. Using the Cortex-A53s or Cortex-M4F for audio processing, they're suited to many digital-audio applications, including surround-sound systems, networked speakers, sound bars, audio/video receivers, and public-address systems. NXP also targets voice control and voice assistants, providing reference designs for consumer products and for noisy industrial environments.

To broaden their appeal for IoT devices, the company has disclosed plans to stack radio modules on some i.MX chips. The first such product will be based on the i.MX 6ULL application processor, but future products may use the i.MX8M Mini.

Microprocessor Report subscribers can access the full article:

http://www.linleygroup.com/mpr/article.php?id=11951

NXP Refits i.MX and Kinetis for IoT

By Tom R. Halfhill

To expand its reach into the growing IoT market, NXP will stack radio modules on some i.MX application processors and is sampling more-powerful Kinetis microcontrollers with fully integrated radios and new security features. These different approaches to wireless integration will give customers more choices while easing NXP's design challenges.

Instead of building analog and digital functions on a monolithic i.MX die or copackaging two die, NXP chose a third solution: package-on-package (PoP) integration. Some i.MX packages will expose ball contacts on top as well as on the bottom, so the radio module can ride piggyback.

The company is starting with the i.MX 6ULL, an ultra-low-power application processor whose Cortex-A7 CPU operates at up to 900MHz. A third-party partner (Murata) makes the first wireless module, which combines 802.11ac Wi-Fi and Bluetooth radios. This combo is scheduled to begin production in 4Q18.

For smaller IoT devices, NXP unveiled the K32W0x MCUs -- the next-generation wireless chips in the Kinetis family. They integrate CPUs, memory, and radios on a monolithic die but are less powerful than the i.MX 6ULL and support slower wireless standards. These chips have a 72MHz Cortex-M4F plus a 72MHz Cortex-M0+ for sensor processing and running the wireless stacks. They include up to 1.25MB of flash memory and up to 384KB of SRAM -- the industry's largest configuration for a wireless MCU. The radios support Bluetooth 5, Zigbee 3.0, Thread (802.15.4), and Apple's HomeKit. These chips are sampling now and are scheduled for production in late 3Q18.

Microprocessor Report subscribers can access the full article:

http://www.linleygroup.com/mpr/article.php?id=11952

About Linley Newsletter

Linley Newsletter is a free electronic newsletter that reports and analyzes advances in microprocessors, networking chips, and mobile-communications chips. It is published by The Linley Group and consolidates our previous electronic newsletters: Processor Watch, Linley Wire, and Linley on Mobile. To subscribe, please visit:

http://www.linleygroup.com/newsletters/newsletter_subscribe.php
