Q&A – Electronics World
https://www.electronicsworld.co.uk – Electronic Engineering and Design

Q&A with Martin Frederiksen, Recab, and Christian Eder, Congatec, about the impact of the new COM-HPC standard
https://www.electronicsworld.co.uk/qa-with-martin-frederiksen-recab-and-christian-eder-congatec-and-about-the-impact-of-the-new-com-hpc-standard/29480/
Fri, 16 Oct 2020 08:57:03 +0000

COM-HPC is the soon-to-be-released PICMG standard for high-performance Computer-on-Modules (COMs). The physical footprints and pinout were approved in November 2019, and the majority of the functionality has since been officially approved, allowing the companies involved in defining the specification to introduce their first products shortly after the standard's official ratification. Final PICMG ratification of the COM-HPC specification is scheduled for this autumn.

“As a result of the digital transformation, demand for embedded computers to provide high-speed performance is growing. To serve the new class of embedded edge servers, scalability must be limitless. With its 440 pins, COM Express does not have enough interfaces for powerful edge servers. The performance of the COM Express connector is also slowly approaching its limits. While COM Express can easily handle the 8GHz clock speed and 8Gbit/s throughput of PCIe Gen 3, the jury is still out on whether the connector can support technological advances such as PCIe Gen 4,” states PICMG in response to why there’s a need for a new specification to complement COM Express.

We are already seeing the first commercial products based on COM-HPC.

Christian Eder, Chairman of the PICMG COM-HPC technical subcommittee and Marketing Manager at Congatec

Q: Can you specify the differences between COM-HPC and COM Express?

A – Eder: The new COM-HPC computer-on-module systems offer considerably higher transmission performance, more high-speed interfaces and much faster network connection, among other benefits. This is because the new module has been redesigned and made more powerful. The COM-HPC specification provides 800 pins, doubling the maximum number of PCIe lanes from 32 for COM Express Type 7 to 64 for COM-HPC/Server. COM Express supports a maximum of PCIe Gen 3.0 with 8Gb/s per lane, whereas a COM-HPC module achieves up to 32Gb/s per lane via PCIe-5.0, four times the data rate of COM Express.
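Those per-lane and lane-count figures multiply out as follows. A quick sketch, treating the quoted per-lane figures as usable data rates and ignoring protocol overhead:

```python
# Approximate per-lane data rates quoted above, in Gbit/s
PCIE_GEN3_PER_LANE = 8    # COM Express Type 7, up to PCIe Gen 3
PCIE_GEN5_PER_LANE = 32   # COM-HPC, up to PCIe Gen 5

def aggregate_gbits(lanes, per_lane_gbits):
    """Total one-way PCIe bandwidth across all lanes."""
    return lanes * per_lane_gbits

com_express = aggregate_gbits(32, PCIE_GEN3_PER_LANE)  # 32 lanes max
com_hpc = aggregate_gbits(64, PCIE_GEN5_PER_LANE)      # 64 lanes max

print(com_express)            # 256 Gbit/s
print(com_hpc)                # 2048 Gbit/s
print(com_hpc / com_express)  # 8.0 -- doubled lanes x quadrupled rate
```

Doubling the lane count and quadrupling the per-lane rate gives the eight-fold aggregate uplift shown.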

Q: In terms of Ethernet connectivity, what performance enhancements will COM-HPC modules offer?

A – Eder: The enormous speed increase has an immense effect on the connectivity performance. Current COM Express modules (Type 7) at edge server level offer a maximum of 10 Gb Ethernet per signal pair. COM-HPC, on the other hand, specifies 25 Gb Ethernet, and more.

With up to eight network connections, it then becomes possible to achieve transfer rates of 100Gbit/s, and theoretically even 200Gbit/s. Such rates are needed in the first instance for high-performance edge server solutions at the edge of telecom networks. Here, fast up, down and crosslinks in all directions must be established: i.e. north in the direction of the central cloud; east and west in the direction of neighbouring edge fogs; and also south in the direction of Industry 4.0 controls at process level.

Q:  Are COM-HPC modules likely to replace the COM Express ones?

A – Eder: The COM-HPC standard will by no means replace the current COM Express standard. Instead, both specifications are likely to exist in parallel for many years to come, depending on the field of application and its requirements.

What we will see is COM-HPC modules being used in particularly performance-hungry applications, for instance to embed artificial intelligence with deep learning in systems, or even to implement tactile Internet at the edge server level.

Martin Frederiksen, Managing Director of embedded computing systems provider Recab, UK

Q: Where will the greatest design challenges be in adopting COM-HPC modules?

A – Frederiksen: The higher data transfer speed of up to 100Gbit/s and 64 PCIe lanes means that baseboard design becomes really tricky; careful routing and the right equipment will be required to ensure good signal integrity. In addition, these signals will not travel as far as in previous versions.

There are ways to mitigate this: PCIe retimer chips can extend signal reach where required trace lengths are long, or the PCB material can be carefully selected. For many years, the base material of the PCB has been FR-4, which has been sufficient for the data transmission required by COM Express modules with PCIe-3.0. With COM-HPC, achieving the best performance will require changing to a base material with lower transmission loss, such as Megtron-4 laminates.
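The difference laminate choice makes can be roughed out with a widely quoted first-order approximation for dielectric trace loss, αd ≈ 2.3 · f(GHz) · tanδ · √εr dB/inch. The sketch below uses illustrative material figures, not datasheet values, and ignores conductor and roughness losses:

```python
import math

def dielectric_loss_db_per_inch(f_ghz, er, tan_delta):
    """First-order dielectric-loss estimate (dB/inch); dielectric term only."""
    return 2.3 * f_ghz * tan_delta * math.sqrt(er)

# Illustrative material properties (assumed values -- check real datasheets)
fr4 = dielectric_loss_db_per_inch(8.0, er=4.3, tan_delta=0.020)
low_loss = dielectric_loss_db_per_inch(8.0, er=3.7, tan_delta=0.005)

print(f"FR-4 estimate:     {fr4:.2f} dB/inch at 8 GHz")
print(f"low-loss estimate: {low_loss:.2f} dB/inch at 8 GHz")
```

Even this crude model shows roughly a four-fold reduction in dielectric loss from the lower-loss laminate, which is why material selection matters at PCIe Gen 4/5 rates.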

Another big challenge using COM-HPC modules will be thermal design. These systems will feature processors operating in excess of 100W in a compact footprint, so an effective means of cooling these processors will be imperative to minimise the risk of overheating and component degradation within embedded systems.
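That cooling constraint can be sanity-checked with the standard steady-state junction-temperature estimate, Tj = Ta + P · (θjc + θca). The thermal-resistance figures below are assumptions for illustration only:

```python
def junction_temp_c(ambient_c, power_w, theta_jc_cw, theta_ca_cw):
    """Steady-state junction temperature: Tj = Ta + P * (theta_jc + theta_ca)."""
    return ambient_c + power_w * (theta_jc_cw + theta_ca_cw)

# Assumed figures: a 100 W processor in a 45 C enclosure, with
# 0.2 C/W junction-to-case and 0.5 C/W case-to-ambient (heatsink + airflow)
tj = junction_temp_c(ambient_c=45, power_w=100, theta_jc_cw=0.2, theta_ca_cw=0.5)
print(f"Tj = {tj:.0f} C")  # ~115 C: above a typical 105 C limit, so the
                           # case-to-ambient resistance must be reduced
```

Working the equation backwards gives the cooling target: at 100 W and 45 °C ambient, staying under 105 °C requires a total junction-to-ambient resistance of no more than 0.6 °C/W.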

Q: Should designers, then, expect significant changes to board design?

A – Frederiksen: Adaptations to baseboard design will certainly pay off in the boundless system potential of the standard. Bringing high-performance computing out of the server room and into the embedded market will open up new opportunities in areas such as artificial intelligence, 5G networking and machine vision. For example, the capacity for high data throughput and low latency will benefit applications such as 3D imaging in medical equipment and 3D mapping of land for rail infrastructure and the seabed. Overall, this means a new generation of embedded computing applications for a wide range of vertical markets, such as telecommunications.

In particular, telecoms and broadcasting were the first areas to quickly adopt COM Express Type 7, given the need for 24/7 operation, high data throughput and low latency. COM-HPC will be used where a climate-controlled data centre is not an option, such as in edge cloud applications.

Faster telecommunications in transport will also need these modules. In this sector, this will allow closer to real-time communication of data between on-board or remote embedded systems and centralised, manned computers.

Q: What module classes are planned for COM-HPC?

A – Eder: There are two module classes for COM-HPC that address different application and performance requirements. In addition, there are two different form factors within these two module classes, similar to COM Express Basic and COM Express Compact. To be precise, we currently distinguish between server and client modules, in analogy to client/server computing.

COM-HPC/Server modules are tailored for use in edge server environments and require the largest possible memory capacity, a particularly powerful network connection and the option to provide many cores for consolidating high workloads. These Server-on-Modules will host up to eight DIMM sockets on a 200x160mm footprint, while the smaller 160x160mm server modules will integrate up to four DIMM sockets.

The COM-HPC/Client modules have a slightly more compact design, are also planned in two footprint variants — 120x120mm and 160x120mm — and are designed for use in high-end embedded computing applications. Unlike the server modules, they provide a maximum of 2x GbE interfaces (via NBASE-T) for Ethernet connection. In addition, COM-HPC/Client modules integrate video interfaces such as DDI and eDP/MIPI-DSI, which – in contrast to COM-HPC/Server modules – can be used to control up to four independent high-resolution displays.

Q&A with Rob Kräwinkel, Lead Electrical Engineer, Solar Team Twente 2019, who discusses the optimisations his team implemented in its 2019 solar-powered race car
https://www.electronicsworld.co.uk/qa-with-rob-krawinkel-lead-electrical-engineer-solar-team-twente-2019-who-discusses-the-optimisations-his-team-implemented-in-its-2019-solar-powered-race-car-red-e-which-raced-in-the-world-solar/27133/
Mon, 03 Aug 2020 08:39:57 +0000

Q:        Is there one race of solar-powered cars that you would not miss?

A:        The ultimate challenge for pure solar-powered race cars is the biennial Bridgestone World Solar Challenge, on a 3,000km route from Darwin in the north of Australia to Adelaide in the south. Teams of engineers and drivers compete to finish the race in the shortest possible time, in specially-designed cars that have only one power source: sunlight!

Q:        How do you ensure you stay ahead of the competition in the race?

A:        To win the World Solar Challenge, a race car must generate as much solar energy as possible, and convert the electricity it generates as efficiently as possible into mechanical power delivered to the wheels. At the same time, it must keep energy losses to a minimum: race teams pay minute attention to aerodynamic design to keep wind resistance to a minimum.

The basic design of all cars in the race is similar: an aerodynamic wing shape covered in arrays of photovoltaic panels to convert the sun’s light into electric power, which is fed directly to a motor driving the wheels, with any excess stored in a small on-board battery.

The most highly-placed teams are those that can best optimise the various elements of the car’s design – the aerodynamics, the power generation system, the motor and the traction system. Race strategy also plays an important part: the driver must move as fast as possible, but not so fast that the car’s battery runs out of power when the car is not in bright sunlight.

Q:        What’s different about Solar Team Twente’s car?

A:        Solar Team Twente has been using self-developed motors in its cars since 2013, which have surpassed the efficiency of off-the-shelf models. For the 2019 World Solar Challenge, the team decided to seek an additional performance advantage by also abandoning the commercial inverters used by most of our competitors, and designing a more robust and efficient version from scratch ourselves.

The team’s race strategy also called for precise regulation of the battery’s state-of-charge. Every team’s goal is to finish the race with zero energy left in the battery, to maximise total energy usage and thus achieve the highest possible speed for the longest possible distance without running out of power. The more accurate the state-of-charge measurement, the more confidently the race team can set the optimum speed of the car’s cruise control, taking into account the weather, battery capacity, competitors’ performance, and other factors.

Q:        What are the most important parts of the solar-powered race car?

A:        There are four important electrical systems in a solar race car: the array of solar panel generators; the battery and its battery management system; the inverter (motor drive), which converts the solar panels’ direct current output to a three-phase alternating current drawn by the motor; and the motor itself.

The car’s designers aim for better than 99% efficiency in the various electrical power-conversion circuits. We were looking for that extra 1% of efficiency to give us an edge over other race cars. When you are already at better than 95% efficiency, eliminating the remaining losses is really hard to do. You have to be able to look at tiny deviations in voltage or current in great detail, which requires an accurate and sensitive power measurement system.

The same requirement for measurement accuracy applies to the battery management system: even tenths of a percentage point of extra accuracy in state-of-charge measurement can make a difference between winning and losing.

Q:        How did you optimise the car’s power?

A:        RED E’s engineering design team needed to repeatedly perform power (concurrent voltage/current) measurements to measure the efficiency of the motor system under various operating conditions. The development process involved multiple iterations of the system design, each one validated to assess the effect on efficiency. The benchmark for the team’s design was the off-the-shelf motor that it, and other teams, had used previously.

To validate the accuracy of the fuel gauge, the car’s on-board sensors measure the battery’s state of charge, the current flowing into the battery (from the solar panels) and the current flowing out of it to the motor. By subtracting output from input, the system can calculate the residual charge in the battery. This requires extremely accurate, continuous measurement of current flows.
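This subtract-output-from-input bookkeeping is essentially coulomb counting. A minimal sketch, assuming a fixed sample interval and the convention that positive current charges the battery:

```python
def coulomb_count(capacity_ah, initial_soc, currents_a, dt_s):
    """Integrate net battery current (A, positive = charging) sampled every
    dt_s seconds; return the estimated state of charge as a 0..1 fraction."""
    charge_ah = initial_soc * capacity_ah
    for i_net in currents_a:
        charge_ah += i_net * dt_s / 3600.0  # amp-seconds -> amp-hours
    return min(1.0, max(0.0, charge_ah / capacity_ah))

# One hour at a net 1 A discharge from a half-full 5 Ah pack
soc = coulomb_count(5.0, 0.5, [-1.0] * 3600, dt_s=1.0)
print(round(soc, 3))  # 0.3
```

Note that any constant offset in the current sensor accumulates over hours of integration, which is why tiny sensor offset errors matter so much for state-of-charge accuracy.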

We also have to measure the power output of the motor system, including the inverter and the motor itself, at a range of power input values, to refine the design and incrementally improve its efficiency. This called for extremely accurate power analysis at a high sampling frequency.
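At its core, the power figure such an analyser reports is the mean of the instantaneous v·i product over synchronised voltage and current samples. A toy sketch of the efficiency calculation (real instruments do this with calibrated front ends at very high sample rates):

```python
def mean_power_w(volts, amps):
    """Average of the instantaneous v*i product over synchronised samples."""
    return sum(v * i for v, i in zip(volts, amps)) / len(volts)

def efficiency(v_in, i_in, v_out, i_out):
    """Output power divided by input power over the same capture window."""
    return mean_power_w(v_out, i_out) / mean_power_w(v_in, i_in)

# Toy DC example: 1000 W drawn from the battery, 985 W delivered to the motor
eta = efficiency([100.0] * 4, [10.0] * 4, [98.5] * 4, [10.0] * 4)
print(eta)  # 0.985
```

With three-phase motor drive the same principle applies per phase, which is why multi-channel, tightly synchronised sampling is essential.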

To achieve the accuracy and precision required for system design and validation, the RED E team chose the WT5000 Power Analyser from Yokogawa.

Q:        How did the WT5000 help?

A:        The current-sensing circuit we had previously used was already better than 99% accurate, but we were looking for better than that. It was only when we analysed the sensor with the WT5000 that we were able to compensate for the offset in the current sensor, and so configure the measurement outputs to achieve optimal accuracy. That crucial extra confidence in our state-of-charge measurements can give the driver a vital extra 1-2km of range at a given speed that we would not otherwise have been sure of getting from the battery.

The WT5000 is a very nice instrument to use; it’s intuitive and easy to navigate around the controls. It’s also easy to tweak the display so that it shows exactly the measurement outputs you are interested in – as when it clearly demonstrated that our new inverter design outperformed the equivalent off-the-shelf model by a significant margin.

Q:        What’s next for Solar Team Twente’s race car?

A:        We are currently very busy getting together a new team of 20 enthusiastic students to once again build the world’s most efficient solar-powered car. In the background, of course, we are still keeping a close eye on all developments in the industry so we can give the new team a kickstart as soon as they start in September of this year. The start of a new team automatically means my team attains the title of “old team members”, in which case we will do our very best to support the new team through the same processes we went through, and together push Solar Team Twente further as a whole. We will aim to build the best possible solar car once again. There will be a focus on improving our custom motor controller even further, but many of our developments also depend on the regulations of the challenge, which have not yet been released.

Q&A with Dr Dominic Binks, VP Technology, Audio Analytic, who discusses the decision to embed sound-recognition AI software on the Arm Cortex-M0+ processor
https://www.electronicsworld.co.uk/qa-with-dr-dominic-binks-vp-technology-audio-analytic-who-discusses-the-decision-to-embed-sound-recognition-ai-software-on-the-arm-cortex-m0-processor/25005/
Thu, 11 Jun 2020 13:34:03 +0000

Q:        What is ai3, and what type of neural network does it use?

A:        At the heart of our ai3 software is our optimised deep neural network for modelling the acoustic and temporal features of sounds, called AuditoryNET. We keep the type of network confidential.

Q:        Why did Audio Analytic choose to run sound-recognition AI on the Arm Cortex-M0+ chip?

A:        Our mission is to give all machines a sense of hearing, even the smallest of devices where power and processing are constrained. Plus, right across the consumer technology sector, there’s a drive to get more AI running at the edge of a network: consumer privacy can be better protected and it’s a more cost-effective option as cloud infrastructure is expensive.

Leading tech consumer brands also want edge-based machine learning (ML) to be as compact as possible, to give product designers maximum freedom. Running our embedded sound-recognition software, ai3, on the most constrained endpoint demonstrates how compact it can be – especially since the Arm Cortex-M0+ is one of the smallest designs available today.

There’s a movement today called tinyML, where the embedded ultra-low power systems and machine-learning communities collaborate, whereas traditionally they operate independently. Qualcomm, Google, Arm and others are all keen advocates of it, building a niche tech community with regular meetings and conferences, like the ‘tinyML Summit’ in California that took place in February. Our M0+ work takes these tinyML innovations to the next level. Rather than tinyML in action, it is microML in action. As well as significantly pushing the boundaries of what’s currently believed to be possible with tinyML, efforts like ours widen the range, size and variety of machines that can have hearing.

Q:        What’s the size of the ai3 software footprint?

A:        For this M0+ implementation, our ai3 software required 181kB across the RAM and ROM, which is considerably less than the available 224kB of ROM and RAM on the M0+-based chip that we used from NXP. To give you a broader picture, a full implementation of our ai3 software detecting multiple sounds requires tens of MIPs and a few hundred kB of memory, although there are often additional acceleration techniques available, which can further reduce the footprint and computational demands.

From the outset our technology was designed to work at the edge; this impacts everything from our data collection and labelling through to model training, evaluation and compression. We also designed the architecture of our software to be flexible when working on highly constrained devices.

All devices have constraints in one form or another, whether it’s battery life, processing capabilities, BoM, or competing functions. These finite restrictions are present whether you are looking to deploy software on a tiny chip like the M0+, or fitting alongside many other applications on a larger processor running on a smartphone. Hardware and software spaces are very competitive, so Audio Analytic designed ai3 to run on minimal memory and computational resources, which means it can run on a dedicated microprocessor or alongside other applications on a larger applications processor. We focused on being purely edge-based without any processing in the cloud, which better protects consumer privacy since audio data is not streamed off the device for analysis. Privacy-friendliness is a compelling point for consumers, so for product designers flexibility and software compactness are crucial.

Designing a smart speaker that is plugged in to run AI sound-recognition differs widely from running the same technology on much smaller battery-powered true wireless earbuds, for example. To be able to meet our targets for compactness, models must have the flexibility to be small and optimised for the end-user application. And overcoming this M0+ challenge proves sound-recognition AI can feature on many consumer electronics devices.

Q:        How challenging was embedding ai3 into the M0+?

A:        As ai3 is effectively a signal-processing application, running it on the Cortex-M4 would be an obvious choice. To fit it on the M0+, the key challenges we faced were the small instruction-set architecture, the lack of floating-point support and the small amount of RAM.

The M0+ design uses the Armv6-M architecture. This small instruction set and lack of hardware support mean mathematical calculations are more labour-intensive, and the compiler injects specific replacement routines that take longer to compute. As there’s no support for floating-point calculations in the M0+, more tasks had to be programmed into the software. These M0+ chips are also designed for devices with very limited processing needs, and hence have very limited RAM, which made developing and debugging tricky.

The Arm Cortex-M4 is a really useful comparison point to illustrate the challenge we faced. The M4 core has instructions that map naturally to the operations ML algorithms perform. With the M0+ there’s less support for these operations, resulting in a significant increase in instruction count on like-for-like computations. As the calculations are much more labour-intensive, we’ve typically seen over five times the number of instructions needed on the M0+ compared with the same task on the M4.

Whilst the M4 uses floating-point, we’ve always supported ML in both fixed and floating-point. For the M0+ project, we relied on our fixed-point implementation. As a result, tasks carried out by hardware are programmed into the software, and issues like scaling, rounding, underflow and overflow all had to be taken into consideration. This tends to use more MIPS, which means extra effort is required to complete the same task.
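The scaling, rounding and overflow concerns described here can be illustrated with a Q15 fixed-point multiply, a format commonly used on integer-only cores (a sketch only; ai3's actual fixed-point implementation is not public):

```python
Q = 15                       # Q15: 1 sign bit, 15 fractional bits
ONE = 1 << Q
I16_MIN, I16_MAX = -(1 << 15), (1 << 15) - 1

def to_q15(x):
    """Convert a float in roughly [-1, 1) to a saturated Q15 integer."""
    return max(I16_MIN, min(I16_MAX, round(x * ONE)))

def q15_mul(a, b):
    prod = a * b                  # 32-bit intermediate on real hardware
    prod += 1 << (Q - 1)          # round-to-nearest rather than truncate
    res = prod >> Q               # rescale back to Q15
    return max(I16_MIN, min(I16_MAX, res))  # saturate on overflow

print(q15_mul(to_q15(0.5), to_q15(0.25)) / ONE)   # 0.125
print(q15_mul(to_q15(-1.0), to_q15(-1.0)) / ONE)  # saturates just below 1.0
```

Every multiply carries this extra shift-round-saturate bookkeeping, which is one reason like-for-like computations cost noticeably more instructions on a core without hardware floating-point.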

Finally, the amount of Flash available on the M0+ can be tight. To address this, we found the right chip with sufficient Flash, and chose to work with the NXP evaluation board FRDM-KL82Z EVK.

Q:        What changes did you have to make to your designs?

A:        The architecture of ai3 is flexible and scalable, so we didn’t really need to change much of the code – just disabling floating-point evaluation, for example, since it wasn’t useful. The existing code actually ran within the constraints of the platform, but it wasn’t as efficient on the M0+, and we wanted the end result at production-ready standards. As a result, we did some processor-specific optimisations to create the headroom we needed.

Selecting the NXP evaluation board was a key decision because, whilst it’s based around the Arm Cortex-M0+ design, it has sufficient RAM for debugging and developing. This also gave us headroom whilst we were tweaking and optimising sections of the code.

Q&A with Dunstan Power, Director, ByteSnap Design, on what makes a successful product design
https://www.electronicsworld.co.uk/qa-with-with-dunstan-power-director-bytesnap-design-on-what-makes-a-successful-product-design/22172/
Thu, 16 Apr 2020 13:27:34 +0000

Q: Do successful products rely on technical ability, or business acumen?

A: Successful product design usually begins with people who know their market very well already; they know what is already out there, what the pros and cons of the competition are, and who their target customers are. Those that jump straight into a new market without prior knowledge tend to encounter more barriers that push costs up and deadlines out.

Interestingly, the reasons for failure are rarely technical – there’s usually a solution that works. However, the key thing is to get the right mix between business and technical requirements.

Q: What are the biggest pitfalls in product design?

A: Without doubt the biggest problem is underestimation. Nearly every project makes this mistake to some degree – from completely underestimating the market as a whole, through to underestimating budgets or testing requirements. We see a lot of keen folk with a strong idea who have created a proof of concept with an Arduino and a breadboard, and think they’re 60% there; in reality, that initial concept represents only 20% of the design process in terms of time and cost. This is a common reason for single-product start-ups failing – they’ve raised what they think is enough capital, but they’ve underestimated the other parameters by a significant margin. This is where early expert consultation can really pay dividends and save time and money.

That ability to budget accurately at the outset is vital and requires good planning of the whole project – many companies forget that there are significant marketing and shipping costs to include even after a completely successful final product design. There are also costs and logistics in owning long-term support, handling obsolescence issues, and providing software updates to combat security vulnerabilities or third-party dependencies.

Another issue we see time and time again is failing to plan ahead properly for compliance testing, which requires a clear-eyed assessment of which countries should be targeted. Compliance standards should be gathered early on, and it’s worth slimming down the list of countries if budget is limited – compliance testing can swallow up to 50% of the budget for truly global compliance. One example we had recently was a project that was going smoothly until it became apparent that it didn’t meet some US state fire regulations (each state has different regulations), so retrospective design fixes had to be made. Knowing exactly which states were crucial markets, and collecting their regulations in advance, would have saved time and money.

Q: Is there a way to manage risk when designing a new product?

A: Of course, there are risks to any new product introduction, but there are ways to manage those risks. The most important step is to pull as much risk forward as you can to the beginning of product development. The fact is that most new products are pushing technical boundaries in some sense or other; there’s always an area that may or may not work out – these are the things to tackle upfront and early on – leave the box-ticking simpler stuff until later in the timeline.

In one example, we were working on a custom Android tablet that needed to deliver precisely-timed audio, using Wi-Fi. We were concerned about the robustness of Wi-Fi in a very busy and congested environment. By conducting a feasibility study at the outset, we were able to test and propose a method of using Wi-Fi combined with Bluetooth beacons to enable on-device audio synchronisation, solving this issue at the start.

Q: Should product design remain in-house, be outsourced, or be done in partnership?

A: Being clear about your in-house strengths and weaknesses is a vital component of success, and we find most companies are relatively good at this. It’s worth bearing in mind that the one-man-band might be the cheapest option to begin with, but we see a lot of companies that take this route run into difficulties when scaling up. It is also absolutely vital to carefully select partners with the right skills and attributes for your needs. Put another way, you don’t want a partner that very quickly becomes a bottleneck in the process – you do get what you pay for here.

Another particularly common problem is outsourcing to one of the many international manufacturing firms, who will deliver an initially sound product based on tweaks to an existing design. However, these firms then often own the IP, leaving the customer with no ability to take their design and business elsewhere, and then exploit this position by ramping costs excessively. It’s vital that contracts are checked to ensure the client owns the IP – this gives businesses flexibility if they want to take production in-house, for example.

Q: Are there ways to recover a project that is spiralling into failure?

A: One of our most popular services is the Design Rescue service, where we see a lot of variations on projects that have effectively ‘failed’ in their current form but can be turned around by addressing specific issues. Most commonly we see problems such as loss of IP, but there’s a huge variety of potential points of failure, including that technically-competent one-man-band retiring, as well as underestimations of budgets and time required to get a product through multiple revisions, testing and production successfully.

Q: In the last few years we’ve seen significant global shortages of capacitors and RAM, which has pushed prices up across the board – are there methods of mitigating these wider market threats to a product?

A: We’ve certainly had customers that have come to us with a working product but with a brief to get component costs down, and there are usually ways to do this. Especially around memory now, there are plenty of options to use different types of memory that can have a significant impact on cost.

Another client had 80 different pumps in the market, each with a different controller board, so by standardising the board across the range, the company made considerable cost-savings without having to compromise quality or operational effectiveness.

The big thing here is to work with the supply chain and your contract manufacturer at an early stage – don’t leave it until the last minute. Getting a draft bill of materials costed up as soon as possible is essential to avoid unexpected costs.
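A draft BOM costing can start as a very simple rollup. The sketch below, using entirely hypothetical parts and prices, also applies quantity price breaks, which is one reason early supplier engagement pays off:

```python
def unit_price(breaks, qty):
    """breaks: {min_qty: unit_price}; pick the best break the order reaches."""
    return min(p for q, p in breaks.items() if qty >= q)

def bom_cost_per_board(bom, build_qty):
    """bom: list of (ref, qty_per_board, price_breaks). Cost for one board."""
    return sum(n * unit_price(breaks, n * build_qty) for _, n, breaks in bom)

# Hypothetical draft BOM with made-up price breaks
bom = [
    ("MCU",  1, {1: 4.50, 1000: 3.20}),
    ("DRAM", 2, {1: 2.10, 1000: 1.60}),
    ("PMIC", 1, {1: 1.20, 5000: 0.95}),
]
print(round(bom_cost_per_board(bom, build_qty=100), 2))   # mostly list prices
print(round(bom_cost_per_board(bom, build_qty=5000), 2))  # volume breaks apply
```

Even this toy model makes the point: the same board costs markedly less per unit at volume, so an early costed BOM exposes where the budget is really going.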

Q&A with Sum Tsai, Vice President of Marketing and Sales, Ryder Industries, about the challenges EMS companies are facing now
https://www.electronicsworld.co.uk/qa-with-sum-tsai-vice-president-of-marketing-and-sales-ryder-industries-about-the-challenges-ems-companies-are-facing-now/20003/
Mon, 02 Mar 2020 14:11:29 +0000

Q:        How is the electronics manufacturing services (EMS) industry shaping up at the moment?

A:        With challenges from online retail, branded products are experiencing a very dynamic business environment. Branded companies are more willing to outsource the heavy lifting to EMS so that they can focus on brand image and reputation, product roadmap, innovation, sales and distribution channels, and sales and service issues. EMS should be able to take care of the rest, e.g. prototype build, proof of concept, product design, cost improvement, supply chain management, manufacturability, NPI, and more.

In this respect, some traditional, older companies (like GE) may lose their inherent advantages.

Q:        How are current changes in the industry affecting Ryder’s business?

A:        Ryder has a solid foundation and customer base in the audio industry, both in production and design know-how, which helps our customers to be competitive in both product innovation and cost. The company has also been aligned with the advancement of the Internet of Things (IoT) and Machine-to-Machine (M2M) connectivity. M2M is going to drive huge growth in the IoT business, and Ryder is gearing up to support it – in terms of reference designs, ready-to-go modules, skills, test equipment and manufacturing processes.

We also aim to develop more business in the health and wellness industries, which may help people to increase awareness in the areas of nutrition, beauty and personal care, mental wellness, pain relief and of course their overall health.

Also, Ryder’s interest and experience in integrating sustainability into its manufacturing sites and processes, using green methodologies and practices, will take Ryder to the next stage of substantial business growth.

There are two core business units in Ryder, and in the next three to five years we will most likely build another two core business units of comparable size. We will also structure our new global business development network to prepare for another new core business for Ryder.

Ryder runs its business efficiently, especially with regards to operations management. Some corporations develop a bulky hierarchy, losing flexibility and effectiveness. However, Ryder has managed to avoid that.

Q:        We’ve heard of Ryder’s “Swiss Precision: Chinese Scale” approach to business. What does this mean?

A:        Ryder Industries is a Swiss-owned company, with deep experience in Chinese manufacturing. It has some 40 years of operational history, originally as an Original Equipment Manufacturer (OEM) and subsequently as an Electronics Manufacturing Service provider (EMS).

Chinese EMS is no longer a simple, cost-driven industry, but a design-, quality- and efficiency-driven one. “Swiss Precision: Chinese Scale” perfectly represents this trend, and Ryder has been working this way for over 40 years. This is also exactly what our customers are looking for, and we look forward to sharing our unique methods with more customers as we grow our business.

]]>
Q&A with Jean-Pierre Petit, Director of Digital Manufacturing, Capgemini, who presents the manufacturing trends for 2020 https://www.electronicsworld.co.uk/qa-with-jean-pierre-petit-director-of-digital-manufacturing-capgemini-who-presents-his-manufacturing-trends-for-2020/18897/ Fri, 07 Feb 2020 14:45:11 +0000 https://www.electronicsworld.co.uk/?p=18897 Q: What role will disruptive technologies, such as AI and IoT, play in 2020? 

A: Heading into the 2020s, manufacturing companies are rapidly harnessing the unlocked potential of artificial intelligence (AI) and the Internet of Things (IoT). Having run successful experimentations, these organisations have realised the potential of these technologies to optimise efficiencies and boost revenue.

A key trend we are seeing is that more of the projects being deployed are driven by return on investment (ROI). Manufacturers have settled on AI and IoT solutions that are proven to work; for now, these typically cover the most basic operational functions. However, there is still a huge amount of unallocated investment within the industry, as manufacturers prioritise projects likely to return their investment in the relatively short term (the next 12-18 months), while deploying at scale remains a challenge.

Q: How has IoT enabled smart factories? 

A: The introduction of IoT has been key to the development of data analysis capabilities. IoT can help provide insights on processes, costs, productivity, while simultaneously looking at the supply chain as a whole – the quality of parts and products being used, where they came from, and how they were grown, bought, or created.

As deployments and successful experimentation continue to mature and evolve, manufacturers will have access to huge vaults of data that will inform best practices and efficiencies that had not previously been considered. With this insight, organisations will be able to leverage the data for predictive analytics. This will help companies better understand how their machines work and how materials and energy are used, allowing them to better prepare for future issues.

Q: How can organisations scale smart factories?

A: To survive in the changing market, manufacturers will need to deploy these successful experimentations at scale. Capgemini’s research found that, while one in three factories today has been transformed into a smart facility, manufacturers plan to create 40% more smart factories in the coming five years. Yet some organisations view the scalability of smart factories as a roadblock to progress.

In order to operate a smart facility on a larger scale, organisations will need to ensure IT/OT convergence is able to support digital continuity and allow for better collaboration. In addition to digital talent, a wide variety of skills will be necessary to drive smart factory transformation. This includes prioritising multi-disciplined profiles, such as engineering-manufacturing, manufacturing-maintenance and safety-security. Soft skills, such as problem solving and collaboration, will also remain a key priority.

Q: Will cybersecurity be a concern for organisations that are turning to smart factories?

A: In today’s connected world, it is necessary to store confidential and sensitive information in the cloud and at the edge. With an increasing number of manufacturers turning to smart facilities, securing the factory’s network is more important than ever. By not securing their end-to-end infrastructure sufficiently, manufacturers risk becoming victims of cybercrime. This could result in the loss or theft of data, industrial espionage, or general disruption to operating systems. Ensuring the soundness of these systems will be a top priority for organisations in 2020, particularly as they scale.

Q: Will consumers’ sustainability concerns affect manufacturing processes? 

A: Consumers are becoming more environmentally conscious than ever, with an increased focus on where and how their products are made and on the sustainability and ethics behind what they buy. In the coming year, in addition to brands, manufacturers will be held accountable for the environmental impact of the products created and resources used. By leveraging IoT and AI, manufacturers can future-proof themselves by approaching their operations with transparency when it comes to public opinion, regulatory compliance and environmental impact.
 
Globally speaking, regardless of industry, the key priorities for any manufacturer are quality, flexibility and agility. There is increasing awareness around traceability and responsibility, meaning manufacturers will soon be held to a higher ethical standard, one they are perhaps unprepared for. This is a challenge that will need to be overcome while still maintaining the same high standard of product and service quality.

]]>
Q&A with Sophie Charpentier, the Graphene Flagship Business Developer for Electronics, who predicts the functionalities of the future smartphones https://www.electronicsworld.co.uk/qa-with-sophie-charpentier-the-graphene-flagship-business-developer-for-electronics-who-predicts-the-functionalities-of-the-future-smartphones/18260/ Wed, 29 Jan 2020 14:15:36 +0000 https://www.electronicsworld.co.uk/?p=18260 Q:        How do you see the mobile phone changing in the near future?

A:        Remember predictive texting? It was a huge improvement from the multi-tap approach. Mobile phones have come a long way since then and advancements in the industry are not slowing down.

Some say we have already witnessed the fastest rate of change in mobile devices, but, arguably, the best is yet to come. While today’s smartphones boast highly responsive touch displays, and high-definition cameras and facial recognition, there is plenty of room for further improvement.

With conversational interfaces such as Google Assistant and Amazon’s Alexa, spoken instruction is changing the way people use their devices. But perhaps the next step is mind control. Before ceasing its activities, Facebook’s Building 8 division was developing technology that enables people to communicate with their devices through their minds, at a speed of 100 words per minute. This is roughly five times faster than typing on a touch display. Without using touch or voice, this kind of technology could allow users to simply think what they want the phone to do, from opening an app to playing video or sending a message.

Considering interfaces like these, the future smartphone model will look vastly different. Wearables and vision overlays could play a greater role; through futuristic glasses, contact lenses and headsets, augmented reality (AR) technology will allow users to control all functions with their eyes and minds.

Q:        How will these technology-laden devices be powered?

A:        Battery life is one area of mobile phones that seems to have taken a backward step, especially now that function-packed phones require more power. The days of charging a Nokia 3310 on a weekly basis are long gone, with phones today working harder than ever before. With continuous use, the average phone lasts just about ten hours and takes around two hours to fully charge.

What if you could charge a battery in five minutes, though? Graphene Flagship partners Thales and M-SOLV, together with researchers from IIT, Italy, are currently making this a reality with graphene. The highly conductive properties of graphene have enabled the companies to develop high-power supercapacitors. Primarily these will be used in the aeronautical and space sectors, providing energy storage with high-speed charge-and-discharge cycles. As the devices charge much faster than conventional batteries, this could benefit the mobile phone industry too, enabling smartphones to be fully charged in mere minutes.
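A rough back-of-envelope calculation shows why supercapacitor charging can be so quick. The figures below are illustrative assumptions, not measured values from the Thales/M-SOLV work: a smartphone-sized pack of around 12Wh charged at a constant 150W would, in the ideal case, fill in under five minutes:

```python
# Back-of-envelope: how fast could a supercapacitor-based phone pack charge?
# All figures are illustrative assumptions.

BATTERY_ENERGY_WH = 12.0  # assumed energy of a typical smartphone battery
WH_TO_JOULES = 3600.0

def charge_time_minutes(energy_wh: float, charger_power_w: float) -> float:
    """Ideal charge time at constant power, ignoring losses and tapering."""
    return energy_wh * WH_TO_JOULES / charger_power_w / 60.0

# A supercapacitor can absorb power limited mainly by the charger and wiring:
print(round(charge_time_minutes(BATTERY_ENERGY_WH, 150.0), 1))  # ~4.8 minutes at 150 W
```

In practice, a lithium battery must taper its charging current as it fills, which is why a real phone still takes on the order of two hours; supercapacitors avoid that constraint, at the cost of lower energy density.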

In the future, phones could also be charged over the air. This is a step up from wireless charging pads, with a company called Energous developing the technology. If a phone is within three feet of a transmitter, it will start charging automatically. As the technology improves and charging distances extend, phones may be kept constantly charged without wires or sockets.  

Phone case technology is also complementing these efforts. The NanoCase for the latest iPhone contains a graphene panel that quickly dissipates the phone’s excess heat, which, the developers claim, extends battery life by up to 20%. 

Also, some manufacturers use these heat-dissipating properties in phones as part of super-cooling systems.

Q:        How is the vision aspect of the mobile phone likely to change?

A:        Imagine being in a supermarket, holding up your camera and establishing which fruit is the freshest. Or, in a more extreme example, the camera could be used for driving in dangerously dense fog by providing augmented outlines of road users and obstacles on the windscreen. This idea is well on its way to becoming a reality.

A new spectroscopy device is opening the door for ordinary people to use technology that was once only available in laboratories. Developed by Graphene Flagship partner ICFO, Spain, this device is built into a smartphone camera, enabling it to see better than the human eye. The device could be used to identify anything from counterfeit drugs to harmful substances within seconds.

The spectrometer is enabled by graphene, a recurring material in several mobile research projects.

Q:        Can the future mobile networks sustain the enormous data exchange these futuristic mobile devices will generate?

A:        We can’t mention future phones without mentioning future mobile networks. Compared to today’s networks, which primarily use 4G and 3G technology, 5G is set to be far faster and more reliable, with greater capacity and lower response times. And, 6G is already in development.

Graphene shows unique potential to exceed the bandwidth demands of future telecommunications, enabling ultra-wide-bandwidth communications; coupled with low power consumption, this will radically change the way data is transmitted across optical communications systems.

]]>
Q&A with Rogier Reinders, Global Marketing Director, Electronics & Advanced Assembly, Dow Consumer Solutions, who discusses the importance of modern silicone materials in electronics design https://www.electronicsworld.co.uk/qa-with-rogier-reinders-global-marketing-director-electronics-advanced-assembly-dow-consumer-solutions-who-discusses-the-importance-of-modern-silicone-materials-in-electronics-design/16375/ Thu, 12 Dec 2019 09:20:16 +0000 https://www.electronicsworld.co.uk/?p=16375 Q: Are advanced silicones important to electronics applications?

A: As a class of materials, silicones offer significant, proven benefits, including hydrothermal stability, resistance to most chemicals, viscosities and mechanical strengths that minimise mechanical stresses on electronic components, good protection against shock and vibration, long pot life and non-toxicity, among others.

Building on these desirable properties, advanced silicone adhesives, sealants, optical bonding materials, conformal coatings and encapsulants deliver specialised performance tailored to the needs of today’s electronic devices. For example, they can offer adequate thermal conductivity or electromagnetic interference (EMI) shielding that have become critical to smaller and more numerous electronic components. Optically-clear silicone materials for bonding automotive and consumer displays can provide high transmittance, low haze, minimal yellowing and superior reliability.

Q: Silicone materials can be found in many applications, but are there any specific electronics market segments that present the greatest growth potential?

A: We see strong demand for advanced silicone solutions in communications, transportation and consumer electronics.

In communications, 5G wireless networks with higher power densities, the Internet of Things (IoT) connecting millions of smart devices and Big Data used in analytics are exponentially increasing the amount and rate of data transfers. For reliability, components in these systems need protection from EMI that can disrupt circuits and from damaging heat generated by high-speed, high-volume operations. Silicone-based conductive materials that offer electrical conductivity or EMI shielding are helping to enable the next generation of data communications.

In transportation, the growth of electric vehicles and the proliferation of automotive driver-assistance sensors require solutions to meet safety and reliability requirements whilst achieving efficient throughput in mass production. Thanks to their versatility and performance-enhancing capabilities, silicones can be found in virtually every area of automotive system assemblies, from powertrains and engine sealants to electronic control modules, novel lighting systems and safety features. For example, silicone materials for EMI shielding can play a key role in ensuring reliable performance over the product lifetime of critical applications like radar, cameras and electronic control units (ECUs).

And, in the highly competitive and fast-paced consumer electronics sector, products like smartphones and foldable tablets require rapid, efficient assembly. They also need solutions for ease of rework, reduced energy usage and management of high temperatures and radiation generated by smaller form factors and increased functionality, all met by advanced silicone materials.

Q: When it comes to materials, what are the current challenges electronic component designers are facing?

A: Several technology trends are prompting designers to move beyond standard silicone materials. First, as devices become thinner and smaller, part miniaturisation and tighter packaging of sensitive components are increasing the need for improved thermal management. Thermally-conductive silicones can work with heat sinks to improve heat dissipation. They are available in many forms, including uncured greases, curable gap fillers that can be applied in thick layers, thermally-conductive adhesives, gels and encapsulants.

Another challenge that silicones can address is reliability. As electronics – particularly consumer devices – become more expensive to purchase and repair, dependable performance and extended useful life are especially important to brand reputation and customer satisfaction. This reliability mandate also applies to advanced driver assistance systems (ADAS) in vehicles, as consumers often pay a premium to obtain such safety features.

Another concern is production speed. When a highly anticipated new device comes on the market, customer demand can surge overnight. Manufacturers need solutions that enable fast throughput to capitalise on this demand without delays. Silicones can shorten production times through accelerated curing, low-temperature curing, primerless bonding and avoiding the need for oven curing.

Q: Do silicone materials meet sustainability requirements in electronics designs?

A: Silicones can play a critical role in increasing the sustainability of components and helping manufacturers comply with environmental regulations. These materials are also helping to support mainstream adoption of electric and autonomous vehicles, which reduce or avoid fossil fuel consumption and carbon emissions. In addition, silicones, including UV-cure grades, are frequently solventless, making them the material of choice where emerging regulations impose complex and costly special handling and processing requirements. Innovative material suppliers also offer more-sustainable solvent-based options to meet customer needs.

Q: Electronics manufacturers are always looking for ways to reduce costs. How do silicones contribute to cost control?

A: There are several ways that high-performance silicones can help reduce manufacturing costs, whether by replacing more-expensive materials, increasing productivity, eliminating secondary operations or reducing overhead:

  • Replacing pre-cured materials that are often more expensive;
  • Avoiding the need for pre-treating, thanks to primerless adhesion to a wide range of substrates, including glass, plastics and metals;
  • Delivering high flow rates for efficient filling, dispensing and self-leveling;
  • Rapid curing with or without added heat to shorten cycle times and reduce energy for curing in production;
  • Enabling rework with long open times;
  • Reducing weight and operational costs by eliminating mechanical fasteners;
  • Offering opportunities to reduce waste and scrap rate of electronic devices, and address warranty issues, thanks to the stability of silicone materials.
]]>
Q&A with Tony Bibbs, President, GForge, who discusses the current state of project and collaboration software https://www.electronicsworld.co.uk/qa-with-tony-bibbs-president-gforge-who-discusses-the-current-state-of-project-and-collaboration-software/15617/ Mon, 25 Nov 2019 11:11:03 +0000 https://www.electronicsworld.co.uk/?p=15617 Q:       What do you see in the collaboration software space today?

A:        The collaboration space has no shortage of options: today’s solutions come in different flavours of software as a service (SaaS), on-premises or hybrid, all promising that a few mouse clicks will help you collaborate better. However, the one attribute many of them have in common is that they don’t deliver what they promise. In fact, many of these solutions actually make collaboration worse.

Q:       What are the most common problems with today’s collaboration solutions?

A:        When a business grows, so do its needs. Although the transition happens slowly, before you know it you’ve accrued many individual solutions, each addressing only a single task. Worse, navigating between all those tools becomes painful. In the best case, new features add more buttons to an already-complicated user interface; in the worst, they mean yet more bookmarks to reach specific features.

Lack of a comprehensive feature-set makes portfolio management difficult, if not impossible. Some solutions focus on work (tickets, issues, tasks), some on process (kanbans, CI/CD integration), and others on people (chat). But what about the bigger picture? How many projects do we have in flight, and what is their relative health? Have we spread our valued team members too thin? How do I quickly find what I’m looking for? Can I easily and successfully search for what I need (projects, users, tickets, documents)? Centralised searching isn’t something you can do without, but it will often require buying yet another tool.

Then, there are the projects themselves: not all are created equal! In a world where organisations have dozens or even hundreds of projects in various phases of development, support and retirement, it’s important to be able to scale features up or down, without having to buy more seats or new solutions.

Then, there’s the ‘SaaS/Cloud versus on-premises’ discussion. There’s no shortage of on-premises solutions, yet many require painful, complex installation and upgrade processes. Given the critical role collaboration solutions play, getting them up and running (and keeping them up to date) needs to be easy. Many of these solutions can’t even be installed without an Internet connection, which means installing a collaboration solution on your super-secure network will be difficult, if not impossible.

And, once you are up and running, how do you control access to your projects?

Access control varies greatly between collaboration solutions. Large projects often have large teams, with technical, management and stakeholder members, each playing a role in successful delivery. Believe it or not, some collaboration solutions don’t allow you to define your own roles, instead imposing a set of roles, often giving users access to either too many or too few features.

Roles are key in any real collaboration solution and are often reusable, specifying the level of access users have. Even if you can define roles on your project, if you’ve been upsold you may well be stuck managing access to each upsold feature separately. This is where the tools start to run the team. What started out as a simple solution soon includes a wiki, chat and help desk, and the next thing you know you are looking at a bunch of tools held together with duct tape and web hooks, none of them the authoritative source of your precious project data, and all individually imposing different ways of working.
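The reusable-role idea described above can be sketched in a few lines. This is a hypothetical illustration; the role and permission names are invented, not taken from GForge or any other product:

```python
# Minimal sketch of user-defined, reusable roles for a collaboration tool.
# Role and permission names are hypothetical examples.

ROLES = {
    "stakeholder": {"view_tickets", "view_documents"},
    "developer":   {"view_tickets", "edit_tickets", "view_documents", "edit_wiki"},
    "manager":     {"view_tickets", "edit_tickets", "view_documents",
                    "view_reports", "manage_members"},
}

def can(user_roles: set, permission: str) -> bool:
    """True if any of the user's roles grants the permission."""
    return any(permission in ROLES[r] for r in user_roles)

# A stakeholder can read documents but not edit tickets:
assert can({"stakeholder"}, "view_documents")
assert not can({"stakeholder"}, "edit_tickets")
# Users may hold several roles, and the union of permissions applies:
assert can({"developer", "stakeholder"}, "edit_wiki")
```

The point is that roles are defined once and reused across projects, rather than access being managed per feature, which is exactly what breaks down when each upsold feature brings its own access model.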

Q:       And what about these solutions’ user interfaces – have they evolved successfully?

A:        When it comes to user interfaces, today’s solutions are all over the place. Geek-centric solutions might make your IT teams happy but could alienate your managers, from project and product managers up to senior leadership. Some solutions create more work for team members so that management can have pretty reports; others are too enterprise-like, trying to be everything to everyone and only adding to the complexity. This lack of focus makes the user experience painful, with too many links, buttons and tables all competing for attention.

Q:       How should businesses approach collaborative solutions?

A:        A common problem with many collaboration solutions is that their base functionality has a high price tag yet offers limited scope, implementing only a few well-thought-out features.

With collaboration solutions playing a key role in “getting things done”, the more you use them the more valuable they become. So, what happens when you reach a point where you want to change how you collaborate? For example, there are a few reasons an organisation may want to move from SaaS to on-premises or vice versa; although not common, such moves should not only be possible but relatively easy to make.

Then, there’s the “vendor lock-in”; you should never get into a vendor relationship that you can’t easily get out of. The upsell models make switching out solutions even more expensive, time-consuming and error-prone. Worse yet, if you have independent vendor solutions for a specific task, then those integrations will break, requiring more time to keep them in sync.

Fortunately, it isn’t all doom-and-gloom when it comes to collaboration software. Be aware that a solution that is right for you now may not be able to grow with you in the future. To that end, it’s important to understand where many of today’s systems fall short; make choices that balance where you are today and where you want to go.

]]>
Q&A with Travis Witteveen, CEO, Avira Antivirus, who discusses the challenges faced by the security industry https://www.electronicsworld.co.uk/qa-with-travis-witteveen-ceo-avira-antivirus-who-discusses-the-challenges-faced-by-the-security-industry/14863/ Thu, 07 Nov 2019 14:05:10 +0000 https://www.electronicsworld.co.uk/?p=14863

Q:        What are the largest threats for Internet users today?

A:        Malware continues to be the largest threat Internet users face today. The main challenges include the exponentially increasing volume of threats, the equally fast-growing number of potentially vulnerable devices in users’ homes and, in turn, the difficulty of letting users enjoy the full power of the Internet without exposing them to the complexity of the solutions required.

Fortunately, we see a significant increase in complementary services based on protecting users’ privacy – among them password managers and VPNs.

Q:       Do you find antivirus and security companies are taking sufficient measures to protect users?

A:        Protecting users’ security and privacy requires companies to address a very complex problem with easy-to-manage solutions, and the costs are rising. The market is naturally growing very fast. It would be wrong for companies not to invest in quality and security, but sadly it happens all the time. Only too often we see users relying on a certain subset of applications to keep them safe, rather than a full suite or complete combination of products, such as anti-virus, software updaters, VPNs, password managers, registry and trace cleaners, privacy settings managers, and so on.

Q:        There have been recent instances of attacks on antivirus companies; is this a growing trend?

A:        Any company that has a large user base – whether an operating system vendor (Microsoft, Apple, Google) or software applications vendor (Adobe, Avast, Avira, etc.) – is constantly under threat and attack from hackers.

For protection, it’s important to find a partner you trust. Measurement of that trust is based on the speed with which your partner detects an attack and resolves it, the transparency around handling the threat and how this may have affected external parties, potential negligence by the vendor and follow-up support in preventing a similar attack in the future.

Q:       There were also recent instances of wireless security cameras being remotely activated and manipulated by hackers. Is this a trend that might continue, too?

A:        The challenge with many low-cost Internet-connected devices is that their vendors invest only so much in them, fixing issues only within a limited time window before the next version is released. This leaves older devices unprotected after a certain period of time.

We approach this challenge by putting technology on network routers similar to that on PCs. Our partnership with TP-Link enables us to constantly monitor the health, security and privacy of all the devices a home might have.

Q:        Is the industry likely to ever eliminate hacker and malware attacks?

A:        Unfortunately, no. Hackers will continue to attack our online lives, but what’s changing is the goals of the attackers, and the types of devices that are vulnerable. We see many more risks with smart IoT devices today. In addition, hackers are shifting from traditional malware attacks on an individual device to more sophisticated attacks targeting users’ activity data.

Interview prepared by Justinas Baltrusaitis, PreciseSecurity.com

]]>
Q&A with Benoit Jouffrey, VP, 5G Expertise at Thales, who discusses 5G security https://www.electronicsworld.co.uk/qa-with-benoit-jouffrey-vp-5g-expertise-at-thales-who-discusses-5g-security/13884/ Thu, 03 Oct 2019 08:32:53 +0000 https://www.electronicsworld.co.uk/?p=13884 Q:       What are the security challenges arising from the wider adoption of 5G?

A:        According to recent research from Ericsson, 5G will reach 40% population coverage and 1.5 billion subscriptions worldwide by 2024, making it the fastest mobile generation ever to be rolled out on a global scale. However, as history has taught us time and time again, any fast-growing technology innovation creates new cybersecurity risks.

With billions of devices connected to the Internet, we face an increased risk of cyberattacks, data privacy breaches and even state sponsored attacks. If we don’t get the security right, there’s a risk of undermining trust in the new wave of connected devices and the concept of the smart city and smart industry at large.

Hence, the three main security challenges we face are:

–           Data protection compliance: while GDPR has shaped global data protection protocol, it will soon be accompanied by an even tougher framework called the ePrivacy Regulation (ePR). ePR will be enacted towards the end of 2019 and into 2020 and will require the pseudonymisation and encryption of personal data as standard.

–           Increased attack surface: 5G is transforming the key mobile and cloud functions of a network, bringing new security threats with it. The risks and attack methods once associated with high-level IT will now be brought to mobile networks. It’s therefore critical that the 5G ecosystem – comprised of MNOs, policymakers, third-party vendors and manufacturers – is prepared.

–           Cyber warfare: Cyber is no longer the warfare of the future, but of the present. Attacks are getting increasingly sophisticated and nationalised cyber warfare is beginning to target all ICT networks, including mobile telecoms. If the 5G network is compromised, it could bring cities and communication networks to a standstill.

Q:       Which industries will be most affected?

A:        The first stage of adoption of 5G is now happening in the consumer market with enhanced mobile broadband. The next stage will be the wider use of the technology for ultra-reliable low latency and massive machine type communications. This will have a profound impact on the industrial world. In the automotive industry, for example, 5G will further enable autonomous driving and vehicle-to-vehicle or vehicle-to-infrastructure connectivity.

In manufacturing, thanks to the very low latency and high reliability, 5G will play a key role in work automation, turning the smart factories vision into reality, while in healthcare it could facilitate remote telesurgery and patient monitoring. However, one of the main areas that will be affected is smart cities, where 5G will play a key role in facilitating the deployment of smart transportation networks, smart buildings and enabling further smart metering. This will drive changes in the way we design technology for connected devices, including the need to embed ad hoc security, as this will be crucial for the new breed of smart technologies to take off.

Q:       What will be the impact of EU’s GDPR and other similar regulations worldwide on how data is managed on 5G networks?

A: EU’s GDPR is shaping data protection globally and we are seeing similar initiatives emerging in some of the world’s leading economies, including the US, Canada, Japan, China, Brazil and South Africa amongst others.

As the world is moving towards tightening data protection requirements to ensure user privacy, there will be a stronger focus on how data is being managed on 5G networks. As set by the ePR, this includes tougher rules for managing electronic communications metadata as it permits the identification of a device on a network. As a result, identifiers such as SUPI (Subscription Permanent Identifier), the equivalent of the IMSI for 5G mobile networks, need to meet strict encryption and data storage requirements. To meet these requirements, connected devices must be designed with cybersecurity in mind.
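The pseudonymisation requirement can be illustrated with a short sketch. Note that this is a simplified, hypothetical example using a keyed hash, and the key value is invented; the actual 5G scheme conceals the SUPI as a SUCI using public-key encryption (ECIES, per 3GPP TS 33.501):

```python
import hashlib
import hmac

# Hypothetical sketch: pseudonymise a permanent subscriber identifier with a
# keyed hash. Real 5G networks conceal the SUPI with public-key encryption
# (ECIES); this HMAC approach only illustrates the pseudonymisation principle
# the regulation calls for.

OPERATOR_KEY = b"example-secret-key"  # invented key; in practice held in secure hardware

def pseudonymise(supi: str, key: bytes = OPERATOR_KEY) -> str:
    """Derive a stable pseudonym: same identifier and key give the same
    output, but the SUPI cannot be recovered without the key."""
    return hmac.new(key, supi.encode(), hashlib.sha256).hexdigest()

token = pseudonymise("imsi-001011234567890")
assert pseudonymise("imsi-001011234567890") == token  # deterministic for the operator
assert pseudonymise("imsi-001019999999999") != token  # distinct subscribers stay distinct
```

A deterministic pseudonym like this still allows a subscriber to be tracked across observations, which is why the real SUCI scheme adds per-use randomness; the sketch covers only the "cannot identify without the key" half of the requirement.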

Q:       Can companies ensure data privacy for individuals and devices on 5G networks?

A:        Businesses and device manufacturers need to adopt a security strategy that is based on five key principles. Firstly, security mechanisms need to be adjusted to the potential risk that a breach could present. Not all uses require the same level of security – a sensor in a soap dispenser doesn’t need the same level of protection as the lock of a connected vehicle, for instance.

The second principle requires thinking of security as an end-to-end approach, from the edge to the core. This means that security must be built into devices and software at the design level, otherwise vulnerabilities can only be “patched” afterwards. For example, using secure 5G SIM cards, which offer full anonymisation of end-to-end subscriber identities, is critical for ensuring robust protection against hacking and future security threats. Businesses should also consider secure device end-point connectivity modules that provide additional layers of security beyond the connectivity itself.

The third principle is the use of state-of-the-art encryption, key storage and data storage, which eliminates the risk of personal information being misused and helps ensure compliance with regulations such as GDPR.

The fourth principle is that organisations need to work with partners who can audit the targeted deployment and help meet key security certifications to ensure all third-party components used for the final product meet the highest security standards.

And last but not least, the uptake of 5G will depend on industry-wide standardisation that reduces market fragmentation and ensures all participants adhere to proven security and data privacy principles, together with the right level of interoperability. These standards may need to evolve over time to adapt to emerging technologies and market developments, and to remain relevant in the long term.

Q:       What are the opportunities for businesses looking to roll out 5G-based services and for electronic manufacturers?

A:        5G offers great potential to transform traditional industries and create new opportunities for service innovation: from smart factories and autonomous vehicles to remote VR training and rapid video streaming. Behind all these innovations, however, lies the need to ensure the security and reliability of 5G technology and the network infrastructure that underpins it.

To make the most of 5G, electronic equipment manufacturers and other key market players need to identify the infrastructure that will support their strategic goals safely and reliably. This infrastructure could rely on network slicing or on private networks. Working with an established, trusted partner that has the expertise to assess the end-to-end security required for 5G networks and devices will be key to embedding cybersecurity at the heart of their 5G deployments.

In the end, success will depend on teamwork: governments, standardisation bodies, MNOs, device manufacturers and software or application providers must work together if they are to build a trusted IoT ecosystem that truly delivers on the promise of 5G.

Q&A with Dr. Thomas Cameron, Director of Wireless Technology, Analog Devices, who discusses the 5G communications technology https://www.electronicsworld.co.uk/qa-with-dr-thomas-cameron-director-of-wireless-technology-analog-devices-who-discusses-the-5g-communications-technology/13631/ Tue, 24 Sep 2019 13:33:27 +0000 https://www.electronicsworld.co.uk/?p=13631

Q:                    What is the current status of 5G communications?

A:                    The 5G cycle is well underway, with many field trials completed and many others in progress, globally. In the 5G Trial Snapshot Report from the GSA, some 330 separate 5G trials and demonstrations have been identified around the world to date, with over 130 mobile operators announcing 5G trials in 62 countries. Whilst many of these trials focus on demonstrating higher throughput, 5G introduces flexibility and new features that enable new applications and sets the stage for a wireless standard that will carry us into 2030 and beyond.

Q:                    What about the adoption of 5G communications, its growth and future development?

A:                    Looking forward, we see no slowing in the generation and consumption of mobile data as video sharing becomes pervasive throughout our society. But the future of connectivity is also about connecting to the world around us as we enter the coming age of machines. We are on the doorstep of an era of digital transformation that will profoundly change the way we live, work and move about daily. Whilst the current smartphone is an interface between humans and information, future devices will actively communicate with each other independent of human interaction. They will monitor the environment around us through a dense network of connected sensors and make active decisions based on that. At the core of this coming digital transformation are highly capable mobile networks connecting everyone and everything with high reliability and low latency.

At the end of 2017, the 3GPP published the first 5G NR specification (Release 15). While this non-standalone specification is only the first of many steps towards 5G, it enabled SoC vendors to move forward with modems to support the 2019 availability of 5G handsets. Another milestone followed when the 3GPP announced the completion of the 5G NR standalone specification, which enables independent deployment of 5G NR networks. While the spectrum of choice varies by region, 5G deployments are expected to launch commercially in 2020, when consumers will begin to experience the first benefits of the technology. We expect 5G massive MIMO to leverage mid-band spectrum in many regions, followed by millimetre-wave deployments as that technology matures.

Q:        Is 5G enabling new technological developments?

A:        While we as engineers tend to focus on the emerging specifications such as bandwidth, latency and such, one of the foundations of 5G is flexibility. If we observe how the specifications are forming, we see the waveforms being defined to enable a range of uses currently envisioned, with provisions for some not even yet imagined.

At a high level, 5G is motivated by the desire to enable three major applications:

  • Enhanced mobile broadband (eMBB);
  • Massive machine type connectivity (mMTC);
  • Ultra-reliable low-latency communications (uRLLC).

Currently, much of the industry 5G focus is on enhanced mobile broadband, driving toward high network capacity and higher throughput that uses beamforming techniques in the mid- and high-band spectrum. We are also beginning to see new applications emerge, such as industrial automation, that leverage the low latency features of the 5G network architecture.

Q:        How can the industry continue to support 5G technology in the future?

A:        Enhanced mobile broadband drives a need for higher data throughput and higher network capacity. Cellular base-station capacity can be increased through three major initiatives: acquiring new spectrum, increasing base-station density and improving spectral efficiency. While we continue to see new spectrum made available for mobile use globally, and network density increasing through the addition of small cells, better utilisation of the available spectrum remains much needed.
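The three levers combine multiplicatively: area capacity is roughly bandwidth × cell density × spectral efficiency, which is why a 3-5× gain in spectral efficiency translates directly into a 3-5× gain in throughput without new spectrum or new sites. A toy calculation (all figures hypothetical, for illustration only):

```python
# Back-of-envelope model of the three capacity levers: area capacity
# is approximately bandwidth x cell density x spectral efficiency.
# All input figures below are hypothetical, chosen only to show scaling.
def area_capacity_mbps(bandwidth_mhz, cells_per_km2, spectral_eff_bps_hz):
    # MHz x (bit/s/Hz) = Mbit/s per cell, then scaled by cell density
    return bandwidth_mhz * cells_per_km2 * spectral_eff_bps_hz

baseline = area_capacity_mbps(20, 5, 2.0)   # 20 MHz, 5 cells/km2 -> 200 Mbit/s/km2
with_mimo = area_capacity_mbps(20, 5, 8.0)  # 4x spectral efficiency -> 800 Mbit/s/km2
print(baseline, with_mimo)
```

The same multiplier could instead come from quadrupling cell density, but improving spectral efficiency avoids the cost of acquiring sites or spectrum, which is the case for massive MIMO made below.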

In recent years, massive MIMO has emerged as a technology that can provide significant improvements in spectral efficiency. It has been demonstrated to provide 3-5 times improvement in mobile data throughput with promise for further improvements.

Massive MIMO is based on the use of many active antenna elements that can be adapted in a coherent manner to accurately deliver a signal to an intended user in space, whilst controlling the interference to other users. The large number of antennas, combined with signal-processing algorithms, enables the system to take frequency re-use down to the micro scale. This introduces a new factor into the frequency re-use equation whereby space is now exploited, allowing the base station to deliver independent data streams to multiple users at the same time and in the same spectrum. The result is a large improvement in spectral efficiency, which in turn greatly improves the throughput of the cell. Figure 1 shows such a system. The antenna physically appears as a panel on which many radiators (antenna elements) are mounted. Behind each radiator is a radio signal chain.


Figure 1: The massive MIMO concept
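The spatial multiplexing described above can be made concrete with a small numerical sketch. With many more antennas (M) than users (K), a zero-forcing precoder steers K independent streams so that each user's cross-user interference is nulled. This is an idealised model with a random channel and perfect channel knowledge, not a claim about any particular product:

```python
# Idealised numpy sketch of multi-user massive MIMO downlink precoding.
# M antenna elements serve K users in the same time/frequency resource;
# zero-forcing nulls the interference between users' streams.
import numpy as np

rng = np.random.default_rng(0)
M, K = 64, 8  # 64 antenna elements, 8 simultaneous users
# Random complex channel matrix: row k is user k's channel to the array
H = rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))

# Zero-forcing precoder: W = H^H (H H^H)^-1, so that H @ W = I_K
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)

effective = H @ W  # each user sees only its own stream: identity matrix
assert np.allclose(effective, np.eye(K))
```

The assertion confirms that all K streams arrive interference-free, i.e. the same spectrum is re-used K times within one cell, which is the spectral-efficiency gain the interview describes. Real systems must also contend with channel estimation error, correlated channels and per-antenna power limits, so practical gains are smaller than this ideal.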

Q:        What is the current status of the massive MIMO technology?

A:        Massive MIMO trials have been completed by many mobile operators globally, and initial commercial deployments of this technology are expected to commence in the 2019-2020 timeframe, driven by early adopters addressing the most congested areas of their networks.

Going forward, as massive MIMO technology evolves and new features are added in the 3GPP wireless standards, we would expect the technology to propagate throughout mobile networks globally.

Q:        Does this technology bring challenges to the engineering community?

A:        In massive-MIMO systems we add many more radio channels, scaling from a typical 8T8R (eight-transmitter, eight-receiver) TDD (time-division duplex) radio head to a 64T64R system. Whilst massive-MIMO systems provide a large improvement in base-station capacity, that improvement comes at the cost of a much more complex radio head.

Historical radio deployments use passive antenna enclosures fed over cables by remote radio heads. The massive-MIMO physical structure is based on an active antenna architecture whereby the active radio signal chains are now embedded within the antenna assembly. Given that these radio systems are typically tower- or pole-mounted, there are limitations on the allowed size and weight of the active antenna system. Whilst the antenna size is dictated by the antenna element spacing, DC power consumption is also a key consideration affecting the weight of the system.

There are many technical challenges for the radio designer to achieve the required radio performance within the size, weight and power consumption limits.

Q:        What is RadioVerse wireless technology?

A:        RadioVerse technology is the embodiment of how Analog Devices has applied a system-level approach to bring value to our customers. Our comprehensive bits-to-antenna product portfolio plus system-level expertise enables us to become more than a vendor – we become a partner with our customers to help solve their toughest problems. For example, by engaging and leveraging the RadioVerse ecosystem on our website, customers can rapidly move from concept to prototype all the way through to production.

The AD9375 small-cell reference design is another good example of what you can find in the RadioVerse ecosystem. The reference design shown in Figure 2 includes all the components necessary for the small cell radio, from the SERDES interface right up to the antenna. The design is suitable for indoor small cells with 2T2R 250mW output power per antenna. All radio components are on the board, including the AD9375 with DPD, high-efficiency PAs, LNAs, filters and a power solution. The power consumption is less than 10W and it comes in a very small form factor, sitting comfortably in your hand.


Figure 2: RadioVerse reference design
