Posts

EQS with unique MBUX Hyperscreen: the big in-car cinema. An assistant for the driver and front passenger that is constantly learning, thanks to artificial intelligence

Visually impressive, radically easy to operate and extremely eager to learn: the MBUX Hyperscreen is one of the highlights in the EQS. It represents the emotional intelligence of the all-electric upper-class model: the large, curved screen unit stretches almost the entire width of the car, from the left to the right A-pillar. In addition to its sheer size, the high-quality, detail-loving design also delivers a “wow” effect. This aesthetic high-tech look is the emotional dimension of the MBUX Hyperscreen. Added to this is artificial intelligence (AI): with learning-capable software, the display and operating concept adapts completely to its user and makes personalised suggestions for numerous infotainment, comfort and vehicle functions. Thanks to the so-called zero layer, the user does not have to scroll through submenus or give voice commands: the most important applications are always offered situationally and contextually at the top level, in view. This removes numerous operating steps from the EQS driver. And not only from the driver: the MBUX Hyperscreen is also an attentive assistant for the front passenger, who receives a dedicated display and operating area.

MBUX (Mercedes-Benz User Experience) has radically simplified the operation of a Mercedes-Benz. Unveiled in 2018 in the current A-Class, it is now on the road worldwide in more than 1.8 million Mercedes-Benz passenger cars. The Van division also relies on MBUX. A few months ago, the second generation of this learning-capable system debuted in the new S-Class. The next big step now follows in the form of the new EQS and the optionally available MBUX Hyperscreen.

“With our MBUX Hyperscreen, a design vision becomes reality,” says Gorden Wagener, Chief Design Officer Daimler Group. “We merge technology with design in a fascinating way that offers the customer unprecedented ease of use. We love simplicity, and we have reached a new level of MBUX.”

“The MBUX Hyperscreen is both the brain and nervous system of the car”, says Sajjad Khan, Member of the Board of Management of Mercedes-Benz AG and CTO. “The MBUX Hyperscreen continually gets to know the customer better and delivers a tailored, personalised infotainment and operating offering before the occupant even has to click or scroll anywhere.”

Electrifying appearance with emotional visualization

The MBUX Hyperscreen is an example of digital/analogue design fusion: several displays appear to blend seamlessly, resulting in an impressive, curved screen band. Analogue air vents are integrated into this large digital surface to connect the digital and physical world.

The MBUX Hyperscreen is surrounded by a continuous plastic front frame. Its visible part is painted in an elaborate three-layer process in “Silver Shadow”. This coating system achieves a particularly high-quality surface impression due to extremely thin intermediate layers. The integrated ambient lighting installed in the lower part of the MBUX Hyperscreen makes the display unit appear to float on the instrument panel.

The front passenger also has their own display and operating area, which makes travelling more pleasant and entertaining. With up to seven profiles, the content can be customised. However, the entertainment functions of the passenger display are only available during the journey within the framework of the country-specific legal regulations. If the front passenger seat is not occupied, the screen becomes a digital decorative element: animated stars, the Mercedes-Benz pattern, are displayed.

For a particularly brilliant image, OLED technology is used in the central and front-passenger displays. Here the individual pixels are self-emitting: pixels that are not activated remain switched off and therefore appear deep black. The active OLED pixels, on the other hand, radiate with high colour brilliance, resulting in high contrast values regardless of the viewing angle and the lighting conditions.

This electrifying display appearance goes hand in hand with emotionally appealing visualisation. All the graphics are styled in a new blue/orange colour scheme throughout. The classic cockpit display with two circular instruments has been reinterpreted with a digital laser sword in a glass lens.

Thanks to its clear screen design with anchor points, the MBUX Hyperscreen is intuitive and easy to operate. One example is the EV mode display style: important functions of the electric drive, such as boost or recuperation, are visualised in a new way with spatially moving clasps and thus made tangible. A lens-shaped object moves between these clasps; it follows gravity and so depicts the G-forces impressively and emotionally.

Personalised suggestions with the aid of artificial intelligence

Infotainment systems offer numerous and comprehensive functions. Several operating steps are often required to control them. In order to further reduce these interaction steps, Mercedes-Benz has developed a user interface with context-sensitive awareness with the help of artificial intelligence.

The MBUX system proactively displays the right functions at the right time for the user, supported by artificial intelligence (see below for examples). The context-sensitive awareness is constantly optimised by changes in the surroundings and user behaviour. The so-called zero-layer provides the user at the top level of the MBUX information architecture with dynamic, aggregated content from the entire MBUX system and related services.

Mercedes-Benz has investigated the usage behaviour of the first MBUX generation. Most use cases fall into the navigation, radio/media and telephony categories; the navigation application therefore always sits at the centre of the screen unit with full functionality.

Over 20 further functions – from the active massage programme through the birthday reminder, to the suggestion for the to-do list – are automatically offered with the aid of artificial intelligence when they are relevant to the customer. “Magic Modules” is the in-house name the developers have given to these suggestion modules, which are shown on the zero-layer.

Here are four use cases. The user can accept or reject the respective suggestion with just one click:

  • If you always call a certain friend on the way home on Tuesday evenings, you will be asked on that day of the week and at that time of day whether you want to make the corresponding call. A business card with the contact’s information appears and, if stored, their picture. All MBUX suggestions are linked to the user’s profile. If someone else drives the EQS on a Tuesday evening, this recommendation is not made, or a different one is, depending on the preferences of the other user.
  • If the EQS driver regularly uses the massage function according to the hot stone principle in winter, the system learns and automatically suggests the comfort function in wintry temperatures.
  • If the user regularly switches on the heating of the steering wheel and other surfaces in addition to the seat heating, for example, this is suggested to them as soon as they press the seat heating button.
  • The chassis of the EQS can be raised to provide more ground clearance, a useful function for steep garage entrances or speed bumps. MBUX remembers the GPS position at which the user made use of the “Vehicle Lift-Up” function. If the vehicle approaches that GPS position again, MBUX independently proposes raising the EQS.
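The pattern behind these suggestions can be sketched as a simple context-matching rule engine: the current driving context is checked against learned habits, and matching suggestions surface on the zero layer for one-click acceptance. The names, rules and thresholds below are illustrative assumptions, not Mercedes-Benz's implementation:

```python
from dataclasses import dataclass

@dataclass
class Context:
    user: str            # active MBUX profile
    weekday: str         # e.g. "Tue"
    hour: int            # 0-23
    outside_temp_c: float
    gps: tuple           # (lat, lon)

def suggest(ctx):
    """Return zero-layer suggestions matching the current context.
    The rules mimic the use cases above; all values are illustrative."""
    suggestions = []
    # Learned habit: a Tuesday-evening call, bound to one profile only.
    if ctx.user == "driver_a" and ctx.weekday == "Tue" and 18 <= ctx.hour <= 21:
        suggestions.append("Call favourite contact")
    # Hot-stone massage proposed at wintry temperatures.
    if ctx.outside_temp_c < 5:
        suggestions.append("Start hot-stone massage programme")
    return suggestions

print(suggest(Context("driver_a", "Tue", 19, 2.0, (48.78, 9.18))))
```

A different profile or context simply yields different (or no) suggestions, matching the profile-bound behaviour described above.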

Interesting facts & figures

With the MBUX Hyperscreen, several displays appear to merge seamlessly, resulting in an impressive curved screen band 141 centimetres wide. The area that occupants can experience measures 2,432.11 cm².

The large glass cover display is curved three-dimensionally in the moulding process at temperatures of approx. 650°C. This process allows a distortion-free view of the display unit across the entire width of the vehicle, irrespective of the display cover radius.

To get to the most important applications, the user must scroll through 0 menu levels. That’s why Mercedes-Benz calls this zero layer.

There are a total of 12 actuators beneath the touchscreen for haptic feedback during operation. If the finger touches certain points there, they trigger a tangible vibration in the cover plate.

Two coatings of the cover plate reduce reflections and make cleaning easier. The curved glass itself consists of particularly scratch-resistant aluminium silicate glass.

The safety measures include predetermined breaking points along the side air outlets as well as five holders which, thanks to their honeycomb structure, can yield in a targeted manner in a crash.

8 CPU cores, 24 GB of RAM and a memory bandwidth of 46.4 GB per second are some of the MBUX technical specifications.

Using the measurement data of a multifunction camera and a light sensor, the brightness of the screen is adapted to the ambient conditions.

With up to seven profiles, the display section can be individualised for the front passenger.

Companies collaborate to make video analytics solutions more accessible in order to drive better business outcomes

Sony Semiconductor Solutions (Sony) and Microsoft Corp. (Microsoft) today announced they are partnering to create solutions that make AI-powered smart cameras and video analytics easier to access and deploy for their mutual customers.

As a result of the partnership, the companies will embed Microsoft Azure AI capabilities on Sony’s intelligent vision sensor IMX500, which extracts useful information out of images in smart cameras and other devices. Sony will also create a smart camera managed app powered by Azure IoT and Cognitive Services that complements the IMX500 sensor and expands the range and capability of video analytics opportunities for enterprise customers. The combination of these two solutions will bring together Sony’s cutting-edge imaging and sensing technologies, including the unique functionality of high-speed edge AI processing, with Microsoft’s cloud expertise and AI platform to uncover new video analytics opportunities for customers and partners across a variety of industries.

“By linking Sony’s innovative imaging and sensing technology with Microsoft’s excellent cloud AI services, we will deliver a powerful and convenient platform to the smart camera market. Through this platform, we hope to support the creativity of our partners and contribute to overcoming challenges in various industries,” said Terushi Shimizu, Representative Director and President, Sony Semiconductor Solutions Corporation.

“Video analytics and smart cameras can drive better business insights and outcomes across a wide range of scenarios for businesses,” said Takeshi Numoto, corporate vice president and commercial chief marketing officer at Microsoft. “Through this partnership, we’re combining Microsoft’s expertise in providing trusted, enterprise-grade AI and analytics solutions with Sony’s established leadership in the imaging sensors market to help uncover new opportunities for our mutual customers and partners.”

Video analytics has emerged as a way for enterprise customers across industries to uncover new revenue opportunities, streamline operations and solve challenges. For example, retailers can use smart cameras to detect when to refill products on a shelf or to better understand the optimal number of available open checkout counters according to the queue length. Additionally, a manufacturer might use a smart camera to identify hazards on its manufacturing floor in real time before injuries occur. Traditionally, however, such applications — which rely on gathering data distributed among many smart cameras across different sites like stores, warehouses and distribution centers — struggle to optimize the allocation of compute resources, resulting in cost or power consumption increases.

To address these challenges, Sony and Microsoft will partner to simplify access to computer vision solutions by embedding Azure AI technology from Microsoft into Sony’s intelligent vision sensor IMX500 as well as enabling partners to embed their own AI models. This integration will result in smarter, more advanced cameras for use in enterprise scenarios as well as a more efficient allocation of resources between the edge and the cloud to drive cost and power consumption efficiencies.

Sony’s smart camera managed app powered by Azure is targeted toward independent software vendors (ISVs) specializing in computer vision and video analytics solutions, as well as smart camera original equipment manufacturers (OEMs) aspiring to add value to their hardware offerings. The app will complement the IMX500 sensor and will serve as the foundation on which ISVs and OEMs can train AI models to create their own customer- and industry-specific video analytics and computer vision solutions that address enterprise customer demands. The app will simplify key workflows and take reasonable security measures designed to protect data privacy and security, allowing ISVs to spend less time on routine, low-value integration and provisioning work and more time on creating unique solutions to meet customers’ demands.

It will also enable enterprise customers to more easily find, train and deploy AI models for video analytics scenarios.

As part of the partnership, Microsoft and Sony will also work together to facilitate hands-on co-innovation with partners and enterprise customers in the areas of computer vision and video analytics as part of Microsoft’s AI & IoT Insider Labs program. Microsoft’s AI & IoT Insider Labs offer access and facilities to build, develop, prototype and test customer solutions, working in partnership with Microsoft experts and other solution providers like Sony. The companies will begin working with select customers within these co-innovation centers later this year.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

About Sony Semiconductor Solutions

Sony Semiconductor Solutions Corporation is the global leader in image sensors. We strive to provide advanced imaging technologies that bring greater convenience and joy to people’s lives. In addition, we also work to develop and bring to market new kinds of sensing technologies with the aim of offering various solutions that will take the visual and recognition capabilities of both humans and machines to greater heights. For more information, please visit: https://www.sony-semicon.co.jp/e/.

Enabling High-Speed Edge AI Processing and Contributing to Building of Optimal Systems Linked with the Cloud

Sony’s Intelligent Vision Sensors IMX500 (left) and IMX501 (right)

 

Sony Corporation today announced the upcoming release of two models of intelligent vision sensors, the first image sensors in the world to be equipped with AI processing functionality*1. Including AI processing functionality on the image sensor itself enables high-speed edge AI processing and extraction of only the necessary data, which, when using cloud services, reduces data transmission latency, addresses privacy concerns, and reduces power consumption and communication costs.

*1 Among image sensors. According to Sony research (as of announcement on May 14, 2020).

These products expand the opportunities to develop AI-equipped cameras, enabling a diverse range of applications in the retail and industrial equipment industries and contributing to building optimal systems that link with the cloud.


  • IMX500: 1/2.3-type (7.857 mm diagonal) approx. 12.3 effective megapixel intelligent vision sensor (bare chip product). Sample shipment: April 2020. Sample price (excluding tax): 10,000 JPY
  • IMX501: 1/2.3-type (7.857 mm diagonal) approx. 12.3 effective megapixel intelligent vision sensor (package product). Sample shipment: June 2020 (planned). Sample price (excluding tax): 20,000 JPY

The spread of IoT has resulted in all types of devices being connected to the cloud, making commonplace the use of information processing systems where information obtained from such devices is processed via AI on the cloud. On the other hand, the increasing volume of information handled in the cloud poses various problems: increased data transmission latency hindering real-time information processing; security concerns from users associated with storing personally identifiable data in the cloud; and other issues such as the increased power consumption and communication costs cloud services entail.

The new sensor products feature a stacked configuration consisting of a pixel chip and a logic chip, and are the world’s first image sensors to be equipped with AI image analysis and processing functionality on the logic chip. The signal acquired by the pixel chip is processed via AI on the sensor itself, eliminating the need for high-performance processors or external memory and enabling the development of edge AI systems. The sensor outputs metadata (semantic information belonging to the image data) instead of image information, reducing data volume and addressing privacy concerns. Moreover, the AI capability makes it possible to deliver diverse functionality for versatile applications, such as real-time object tracking with high-speed AI processing. Different AI models can also be chosen by rewriting the internal memory in accordance with user requirements or the conditions of the location where the system is being used.

Main Features

■ World’s first image sensor equipped with AI processing functionality

The pixel chip is back-illuminated and has approximately 12.3 effective megapixels for capturing information across a wide angle of view. In addition to the conventional image sensor operation circuit, the logic chip is equipped with Sony’s original DSP (Digital Signal Processor) dedicated to AI signal processing, and memory for the AI model. This configuration eliminates the need for high-performance processors or external memory, making it ideal for edge AI systems.

■ Metadata output

Signals acquired by the pixel chip are run through an ISP (Image Signal Processor) and AI processing on the logic chip, and the extracted information is output as metadata, reducing the amount of data handled. Ensuring that image information is not output helps reduce security risks and address privacy concerns. In addition to the image recorded by the conventional image sensor, users can select the data output format according to their needs and uses, including ISP-format output images (YUV/RGB) and ROI (Region of Interest) specific-area extracted images.
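Conceptually, a host application chooses one of these output formats per deployment, and only the metadata mode keeps all image data on-sensor. A minimal sketch of that selection logic; the function and format names are hypothetical illustrations, not Sony's actual driver API:

```python
# Hypothetical host-side selection of the sensor output format.
# The format names mirror the press release; the API is illustrative.
FORMATS = {"bayer_raw", "isp_yuv", "isp_rgb", "roi", "metadata"}

def configure_output(fmt):
    """Validate the requested output mode. Only 'metadata' keeps all
    image data on-sensor, which is the privacy-preserving option."""
    if fmt not in FORMATS:
        raise ValueError(f"unsupported format: {fmt}")
    return {"format": fmt, "image_leaves_sensor": fmt != "metadata"}

print(configure_output("metadata"))
```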

■ High-speed AI processing

When a video is recorded using a conventional image sensor, it is necessary to send data for each individual output image frame for AI processing, resulting in increased data transmission and making it difficult to deliver real-time performance. The new sensor products from Sony perform ISP processing and high-speed AI processing (3.1 milliseconds processing for MobileNet V1*2) on the logic chip, completing the entire process in a single video frame. This design makes it possible to deliver high-precision, real-time tracking of objects while recording video.
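A back-of-envelope check makes the "single video frame" claim concrete: at the sensor's 30 fps metadata output rate, each frame allows roughly 33.3 ms, comfortably above the quoted 3.1 ms MobileNet V1 latency. The figures come from the text and spec table; the calculation is just arithmetic:

```python
# Does the quoted AI latency fit inside one frame at 30 fps?
FRAME_RATE_FPS = 30
AI_LATENCY_MS = 3.1  # MobileNet V1 on the logic chip, per the text

frame_budget_ms = 1000 / FRAME_RATE_FPS  # ~33.3 ms per frame
print(round(frame_budget_ms, 1), AI_LATENCY_MS < frame_budget_ms)
```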

*2 MobileNet V1: An image analysis AI model for object recognition on mobile devices.

■ Selectable AI model

Users can write the AI models of their choice to the embedded memory and rewrite and update them according to their requirements or the conditions of the location where the system is being used. For example, when multiple cameras employing this product are installed in a retail location, a single camera type can be used with versatility across different locations, circumstances, times or purposes: installed at the entrance of a facility, it can count visitors entering; on a store shelf, it can detect stock shortages; on the ceiling, it can create heat maps of store visitors (detecting locations where many people gather), and the like. Furthermore, the AI model in a given camera can be rewritten, for example from one used for heat mapping to one for identifying consumer behaviour.
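The deployment pattern described above can be sketched as a mapping from camera placement to the model written into embedded memory. The model names and the `SmartCamera` class are hypothetical illustrations of the idea, not Sony tooling:

```python
# Illustrative placement-to-model mapping; rewriting embedded memory
# lets one camera type serve many retail roles.
MODELS_BY_PLACEMENT = {
    "entrance": "visitor_counter",
    "shelf": "stock_shortage_detector",
    "ceiling": "heat_mapper",
}

class SmartCamera:
    def __init__(self):
        self.model = None

    def deploy(self, placement):
        """Write the AI model matching this placement to embedded memory."""
        self.model = MODELS_BY_PLACEMENT[placement]
        return self.model

cam = SmartCamera()
print(cam.deploy("shelf"))
```

Re-deploying the same camera with a different placement key models the rewrite from, say, heat mapping to consumer-behaviour analysis.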

Key Specifications

Model names: IMX500 (bare chip product), IMX501 (package product)

  • Number of effective pixels: 4056 (H) × 3040 (V), approx. 12.3 megapixels
  • Image size: diagonal 7.857 mm (1/2.3 type)
  • Unit cell size: 1.55 μm (H) × 1.55 μm (V)
  • Frame rate: full pixel 60 fps; video 4K (4056 × 2288) 60 fps, 1080p 240 fps; full/video + AI processing 30 fps; metadata output 30 fps
  • Sensitivity (F5.6 standard value): approx. 250 LSB
  • Sensor saturation signal level (minimum value): approx. 9610 e-
  • Power supply: analog 2.7 V; digital 0.84 V; interface 1.8 V
  • Main functions: AI processing function, ISP, HDR shooting
  • Output: MIPI D-PHY 1.2 (4 lane) / SPI
  • Color filter array: Bayer array
  • Output format: image (Bayer RAW), ISP output (YUV/RGB), ROI, metadata
  • Package: IMX500 – none (bare chip); IMX501 – ceramic LGA 12.5 mm (H) × 15.0 mm (V)

About Sony Corporation

Sony Corporation is a creative entertainment company with a solid foundation of technology. From game and network services to music, pictures, electronics, semiconductors and financial services – Sony’s purpose is to fill the world with emotion through the power of creativity and technology. For more information, visit: http://www.sony.net/