• Video consultations, Monday to Sunday, with medical specialists, psychologists and nutritionists, in Spanish and English.
  • Various online resources that help improve health and well-being.

With the Cigna Wellbeing App, we are at our members' side to take care of them, at any time, from anywhere.*

*Except dental policies





  • Easy, secure and confidential access to a team of health professionals via mobile, every day of the week**.
  • Hold your medical consultations by video call or telephone, in Spanish or English.

  • Maximum guarantee of confidentiality.

  • No need to travel.

  • Medical appointments can be scheduled within two hours.

  • With the possibility of issuing prescriptions.

  • **The video consultation and telephone consultation services are available 24 hours a day, every day of the year.


  • Tracking and recording of biometric data such as body weight, heart rate, blood pressure, sleep activity, cholesterol or glucose.
  • Facilitates the early detection of possible abnormalities.

  • Lets you follow the evolution of your health status through your recorded biometric data.

  • Monitoring of chronic diseases.


  • Prevention of risk factors and health monitoring through guided programs.
  • Specific questionnaires on nutrition, physical activity, rest and management of emotions or stress.

  • Online coaching programs.

  • Cigna Wellbeing App health library.

    Want to learn more about the Cigna Wellbeing App?

 Do you know how the Cigna Wellbeing App can help your employees?


  • To register in the Private Area, access the Insured Private Area and follow the steps indicated. Remember to have your insured number and policy number handy; you can find them on your insurance documentation or on your Cigna card.
  • Your access codes for the Cigna Wellbeing App are the same as for your Insured Private Area. If you have just registered in the Private Area, you will need to wait at least one hour before accessing the Cigna Wellbeing App.


The app is available to download for free on the App Store and Google Play Store.

The next generation of BMW’s voice assistant will be based on Amazon Alexa technology.

The next generation of BMW’s voice assistant will be based on Amazon Alexa technology. This was announced today by Stephan Durach, Senior Vice President, Connected Enterprise and Development Technical Operations, BMW Group, and Dave Limp, Senior Vice President, Amazon Devices and Services, at the Amazon Devices and Services launch event.

“Alexa technology will enable an even more natural dialogue between driver and vehicle, so drivers can stay focused on the road ahead. This will take the digital experience to a whole new level,” said Stephan Durach.

Amazon’s Dave Limp added: “This cooperation with BMW is a great example of what Alexa Custom Assistant was designed for: making it faster and easier for companies to build custom intelligent assistants on virtually any device, without the cost and complexity of building from scratch.”

Since the introduction of BMW’s first voice assistant (BMW Intelligent Personal Assistant) in 2018, voice interaction has become an increasingly important part of BMW iDrive. BMW’s new voice assistant will work in cooperation with Alexa, giving customers the benefits of an intelligent assistant that is an expert in vehicles and services, while Alexa provides the familiar experience many customers already use today, such as controlling music, managing their smart home remotely, adding items to a shopping list or checking the time of day.

Customers can still choose to use the BMW voice assistant and Alexa individually, or have both assistants work together. The first vehicles with the new generation of BMW’s voice assistant will be launched in the next two years.

Data protection is an absolute priority for the BMW Group.


The BMW Group ensures that customer data is protected and processed in accordance with data privacy requirements through established processes in all markets in which the company operates. The BMW Group and Amazon share a strong commitment to maintaining customer trust and protecting their privacy, including giving them control over their data.

BMW Group

With its four brands BMW, MINI, Rolls-Royce and BMW Motorrad, the BMW Group is the world’s leading manufacturer of premium automobiles and motorcycles and also offers premium financial and mobility services. The BMW Group’s production network includes 31 production and assembly sites in 15 countries, and the company has a global sales network in more than 140 countries.

In 2021, the BMW Group sold more than 2.5 million passenger cars and more than 194,000 motorcycles worldwide. Pre-tax profit in fiscal 2021 was approximately €16.1 billion on revenue of approximately €111.2 billion. As of December 31, 2021, the BMW Group had 118,909 employees.

The success of the BMW Group has always been based on long-term planning and responsible action. The company set the course for the future at an early stage and consistently places sustainability and efficient resource management at the center of its strategic direction, from the supply chain through production to the end of the use phase of all products.

Taking a quantum leap: artificial intelligence as a key technology for the automotive industry

More and more vehicle functions are based on artificial intelligence. However, conventional processors and even graphics chips are increasingly reaching their limits when it comes to the computations required for neural networks. Porsche Engineering reports on new technologies that will speed up AI calculations in the future.

Artificial intelligence (AI) is a key technology for the automotive industry, and fast hardware is equally important for the complex back-end calculations involved. After all, in the future it will only be possible to bring new features into series production with high-performance computers. “Autonomous driving is one of the most demanding AI applications of all,” explains Dr. Joachim Schaper, Senior Manager for AI and Big Data at Porsche Engineering. “The algorithms learn from a multitude of examples collected by test vehicles using cameras, radar or other sensors in real traffic.”

Dr. Joachim Schaper, Senior Manager for AI and Big Data at Porsche Engineering

Conventional data centers are increasingly unable to cope with the growing demands. “Now it takes days to train a single variant of a neural network,” explains Schaper. So, in his opinion, one thing is clear: automakers need new technologies for AI calculations that can help algorithms learn much faster. To achieve this, as many vector matrix multiplications as possible must be executed in parallel in the complex deep neural networks (DNNs), a task that graphics processing units (GPUs) specialize in. Without them, the incredible advances in AI in recent years would not have been possible.
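The vector-matrix multiplication at the heart of every dense DNN layer is a simple operation; the challenge is the sheer number of them. A minimal NumPy sketch (the layer sizes are illustrative, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# One dense layer: input activations times a weight matrix, plus bias.
# GPUs (and chips like the Wafer Scale Engine) parallelize this product.
x = rng.standard_normal(512)          # input activations
W = rng.standard_normal((512, 256))   # layer weights
b = np.zeros(256)                     # bias

y = x @ W + b                         # the vector-matrix multiplication
print(y.shape)                        # (256,)
```

Deep networks stack thousands of such products per input, and training repeats them over millions of examples, which is why it can take days on conventional hardware.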

50 times the size of a GPU

However, graphics cards were not originally designed for AI use, but rather to process image data as efficiently as possible. They are increasingly pushed to the limit when it comes to training algorithms for autonomous driving. Therefore, specialized AI hardware is required for even faster calculations. The Californian company Cerebras has presented a possible solution. Its Wafer Scale Engine (WSE) is optimally tailored to the requirements of neural networks by packing as much computing power as possible into one giant computer chip. It is more than 50 times the size of a typical graphics processor and offers room for 850,000 compute cores, more than 100 times more than today’s top GPU.

In addition, Cerebras engineers have networked the computational cores together with high-bandwidth data lines. According to the manufacturer, the Wafer Scale Engine network carries 220 petabits per second. Cerebras has also widened the bottleneck within GPUs: data travels between memory and compute almost 10,000 times faster than on high-performance GPUs, at 20 petabytes per second.

Giant Chip: Cerebras’ Wafer Scale Engine packs massive computing power into a single integrated circuit with a side length of over 20 centimeters.

To save even more time, Cerebras borrows a trick from the brain, where neurons are active only when they receive signals from other neurons, so the many connections that are currently idle consume no resources. In DNNs, by contrast, vector-matrix multiplication often involves multiplying by zero, which wastes time unnecessarily. The Wafer Scale Engine therefore refrains from doing so. “All zeros are filtered out,” Cerebras writes in its white paper on the WSE. The chip only performs operations that produce a non-zero result.
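The effect of filtering out zeros can be sketched numerically (a NumPy illustration only; the WSE does this in hardware):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((1000, 100))

# After a ReLU, most activations in a DNN are typically zero.
x = np.maximum(rng.standard_normal(1000), 0.0)
x[rng.random(1000) < 0.7] = 0.0       # force roughly 70% sparsity

dense = x @ W                         # multiplies every entry, zeros included

nz = np.nonzero(x)[0]                 # rows a zero-skipping chip would keep
sparse = x[nz] @ W[nz, :]             # only non-zero activations contribute

assert np.allclose(dense, sparse)     # same result, far fewer multiplies
print(len(nz), "of", len(x), "rows actually needed")
```

The sparse path produces the identical result while touching only the non-zero rows, which is exactly the saving the zero-filtering hardware exploits.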

One drawback of the chip is its high electrical power requirement of 23 kW, which makes water cooling necessary. Cerebras has therefore developed its own server enclosure for use in data centers. The Wafer Scale Engine is already being tested in the data centers of some research institutes. Artificial intelligence expert Joachim Schaper thinks the giant chip from California could also accelerate automotive development. “Using this chip, a week’s training could theoretically be reduced to a few hours,” he estimates. “However, the technology has yet to prove this in practical tests.”

Light instead of electrons

As unusual as the new chip is, it still runs on conventional transistors, like its predecessors. Companies such as Lightelligence and Boston-based Lightmatter want to use the much faster medium of light for AI calculations instead of comparatively slow electronics, and are building optical chips to do so. DNNs could thus work “at least several hundred times faster than electronic ones,” the Lightelligence developers write.

“With the Wafer Scale Engine, a week of training could theoretically be reduced to just a few hours.” Dr. Joachim Schaper, Senior Manager for AI and Big Data at Porsche Engineering

To do this, Lightelligence and Lightmatter use the phenomenon of interference. When light waves amplify or cancel each other out, they form a light-dark pattern. If the interference is directed in a certain way, the new pattern corresponds to the vector-matrix multiplication of the old one. Light waves, in other words, can “do math.” To make this practical, the Boston developers etched tiny light guides onto a silicon chip. As in a woven fabric, these guides cross each other several times, and the interference takes place at the junctions. In between, tiny heating elements regulate the refractive index of the light guides, shifting the phase of the light waves relative to one another. This makes it possible to control their interference and thus perform vector-matrix multiplications.

However, the Boston companies are not doing without electronics entirely. They combine their light-based computers with conventional electronics that store data and perform all calculations except the vector-matrix multiplications. These include, for example, the nonlinear activation functions that modify the output values of each neuron before they are passed on to the next layer.

Computing with light: Lightmatter’s Envise chip uses photons instead of electrons to compute neural networks. Input and output data are supplied and received by conventional electronics.

With the combination of optical and digital computing, DNNs can be calculated extremely quickly. “The main advantage is low latency,” explains Lindsey Hunt, a spokesperson for Lightelligence. This allows a DNN to detect objects in images faster, such as pedestrians and e-scooter riders; in autonomous driving, that could mean quicker reactions in critical situations. “The optical system also makes more decisions per watt of electrical power,” said Hunt. That is especially important as growing computing power in vehicles increasingly comes at the expense of fuel economy and range.

Lightmatter and Lightelligence solutions can be inserted as modules into mainstream computers to speed up AI calculations, just like graphics cards. In principle, they could also be integrated into vehicles, for example to implement autonomous driving functions. “Our technology is well suited to serve as an inference engine for a self-driving car,” explains Lindsey Hunt. Artificial intelligence expert Schaper has a similar opinion: “If Lightelligence succeeds in building components suitable for automobiles, this could greatly accelerate the introduction of complex AI functions in vehicles.” The technology is now ready for the market: the company is planning its first pilot tests with customers in 2022.

The quantum computer as an AI turbo

Quantum computers are somewhat further from practical application. They, too, will speed up AI calculations, because they can process large amounts of data in parallel. To do this, they work with so-called qubits. Unlike the classical unit of information, the bit, a qubit can represent the two binary values 0 and 1 simultaneously: the two numbers coexist in a state of superposition that is only possible in quantum mechanics.
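Superposition is easy to see in a small state-vector simulation (standard quantum-computing conventions, not vendor-specific code):

```python
import numpy as np

# A qubit state is a normalized 2-component complex vector a|0> + b|1>.
zero = np.array([1, 0], dtype=complex)

# The Hadamard gate turns |0> into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi = H @ zero

probs = np.abs(psi) ** 2
print(probs)      # [0.5 0.5]: both outcomes equally likely when measured

# n qubits span a 2**n-dimensional state space, which is where the
# parallelism mentioned above comes from.
print(2 ** 20)    # 1048576 amplitudes for just 20 qubits
```

The exponential growth of the state space with the number of qubits is what makes quantum computers attractive for the high-dimensional pattern recognition discussed next.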

“The more complicated the patterns, the more difficult it is for conventional computers to distinguish classes.” Heike Riel, director of IBM Research Quantum Europe/Africa

Quantum computers could boost artificial intelligence when it comes to classifying things, for example in traffic. There are many different categories of objects there, including bikes, cars, pedestrians, signs, dry and wet roads. They differ in terms of many properties, which is why experts speak of “pattern recognition in higher-dimensional spaces.”

“The more complicated the patterns, the more difficult it is for conventional computers to distinguish the classes,” explains Heike Riel, who leads IBM’s quantum research in Europe and Africa. That’s because with each dimension, computing the similarity of two objects becomes more expensive: how similar are an e-scooter rider and a pedestrian with a walker trying to cross the street? Quantum computers can work efficiently in high-dimensional spaces where conventional computers struggle. For certain problems, this property could prove useful, allowing some of them to be solved faster with quantum computers than with conventional high-performance computers.

Heike Riel, director of IBM Research Quantum Europe/Africa

IBM researchers have analyzed statistical models that can be trained for data classification. Initial results suggest that cleverly chosen quantum models perform better than conventional methods for certain data sets. Quantum models are easier to train and appear to have higher capacity, allowing them to learn more complicated relationships.

Riel admits that while current quantum computers can be used to test these algorithms, they do not yet offer an advantage over conventional computers. However, the development of quantum computers is advancing rapidly. Both the number of qubits and their quality are constantly increasing. Another important factor is speed, measured in Circuit Layer Operations Per Second (CLOPS), which indicates how many circuit layers a quantum computer can execute per second. It is one of the three key performance criteria of a quantum computer: scalability, quality, and speed.

In the foreseeable future, it should be possible to demonstrate the superiority of quantum computers for certain applications, that is, to show that they solve problems faster, more efficiently, and more accurately than a conventional computer. But building a powerful, error-corrected, general-purpose quantum computer will still take some time; experts estimate at least another ten years. The wait could be worth it, though. Like optical chips or new architectures for electronic computers, quantum computers could hold the key to future mobility.


When it comes to AI calculations, not only conventional microprocessors but also graphics chips are now reaching their limits. Companies and researchers around the world are therefore working on new solutions. Wafer-scale chips and light-based computers are close to reality. In a few years, these could be supplemented by quantum computers for particularly demanding calculations.

Smart technology with perfect styling

Duravit’s focus is on comfort and added value for the user

As the world becomes ever more digital, innovative products that enhance the user’s comfort are also finding their way into the bathroom. This is the end product of a customer-focused research and development process that combines technology, functionality, and design in a meaningful way.

Individual mirror control
Gone are the days of users being dazzled by harsh light in front of the mirror. The mirrors from the Happy D.2 Plus and XViu series by sieger design offer practical control via icons on the mirror surface. The illuminated symbols are operated contactlessly by “hovering” a finger over the desired icon, detected by an infrared sensor; thanks to the universal symbols, operation is intuitive.

Aesthetically, the icons harmonize perfectly with the overall design of the mirror.
The LED lighting can be individually configured via the touchscreen. With a luminosity of up to 1,000 lux, XViu allows continuous adjustment of the light color from warm light at 2700 K to cold light at 6500 K. The integrated mirror heating, which keeps the mirror free of fog at all times, is also controlled using the relevant icon.

The optional dual mirror sets from the Happy D.2 Plus series, also by sieger design, are a perfect blend of aesthetics and technology. Innovative wireless technology allows for synchronized adjustment of the settings for both mirrors. All technologies are subtly integrated within the mirror.

Mirrors from the White Tulip series are available in versions controlled by sensors or an app. The light temperature, which features a memory effect, can be synchronized with other connected lamps in the living area and controlled via “Casambi”, an app that has established itself as a reference among smart-home control systems. This feature can be used to dim the mirrors and switch the mirror heating on and off.

Iconic design and hi-tech

The D.1 faucet range saw Duravit team up with Matteo Thun and Antonio Rodriguez to create an iconic design. With the D.1e electronic version, extra state-of-the-art technology has been integrated into the accentuated design for even greater operating comfort. This variant offers the highest comfort and safety thanks to a range of functions. The flat operating button with the integrated LED colored ring is easy and intuitive to operate. A light tap starts or stops the flow of water, and a turn to the left or right alters the water temperature from cold to warm. With the continuous color display from blue to red, the LED ring also provides visual feedback.

Up to three temperature values can be individually stored and directly activated via the “Quick Access” feature. An electronic thermostat integrated into the control unit reacts sensitively to any change to the cold and warm water flow, so that the required temperature is automatically adjusted and kept constant. Additionally, the scald protection is preset at 38 degrees Celsius.
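The behavior described, holding the water at a target temperature while never exceeding the 38-degree scald limit, amounts to a closed control loop. A minimal sketch with hypothetical supply temperatures and gain (an illustration of the principle, not Duravit's actual firmware):

```python
SCALD_LIMIT = 38.0  # degrees Celsius, the preset scald protection

def mix(cold, hot, frac):
    """Mixed-water temperature for a given hot-water fraction."""
    return cold + frac * (hot - cold)

def adjust(frac, cold, hot, setpoint, gain=0.01):
    """One proportional-control step toward the (clamped) setpoint."""
    target = min(setpoint, SCALD_LIMIT)      # scald protection
    error = target - mix(cold, hot, frac)
    return min(max(frac + gain * error, 0.0), 1.0)

# Cold supply at 10 C, hot supply at 60 C; the user asks for 45 C.
frac = 0.5
for _ in range(500):
    frac = adjust(frac, 10.0, 60.0, setpoint=45.0)

print(round(mix(10.0, 60.0, frac), 1))       # settles at the 38.0 C limit
```

Because the requested 45 °C exceeds the scald limit, the loop converges to 38 °C; any drift in the cold or hot supply simply produces a new error that the next control step corrects.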

D.1e features a range of presets that can be configured easily and individually, for instance temperature limitation, maximum water flow time, hygiene interval, thermal disinfection, color selection for the ambient lighting on the LED illuminated ring.

Intuitive operation for perfect hygiene
SensoWash® is a by-word for contemporary, gentle, and flawless hygiene. The SensoWash® Starck f shower-toilet is iconic and minimalistic in equal measure. The design of the new shower-toilet generation bears the hallmark of star designer Philippe Starck. His mission: to make the most natural form of cleansing – with water – accessible to a broad public, to subtly integrate technology into the design, and to guarantee the highest level of comfort.

Users’ individual requirements were taken into account during product development – for example, the option to adjust the intensity and position of the spray arm. This function and others are managed using an innovative remote control. While the “Lite” version impresses with its elegantly restrained look, the premium version, SensoWash® Starck f Plus, stands out thanks to its sophisticated, streamlined design. In addition to operation via remote control, on this model, functions such as the water and seat temperature can also be configured using the app. Five different user profiles can be set up. The customer-oriented app enables quick and easy operation and reflects the stylish design of the shower-toilet.

The flush function and odor extraction system may also be controlled via remote control or app if the wall-mounted element, itself perfectly harmonized for SensoWash® Starck f by Duravit, is combined with the A2 electronic actuator plate. An ideal example of how Duravit deploys digital technologies to enhance and maximize comfort.

Duravit AG
Founded in 1817 in Hornberg in the Black Forest, Duravit AG is today a leading international manufacturer of designer bathrooms. The company is active in more than 130 countries worldwide and stands for innovation in the fields of signature design, comfort-enhancing technology and premium quality. In cooperation with an international network of high-profile designers such as Philippe Starck, sieger design, Christian Werner, Cecilie Manz and young talents such as Bertrand Lejoly and Kurt Merki Jr., the company develops unique bathrooms that enhance quality of life for users on a sustained basis. Duravit’s product portfolio comprises sanitary ceramics, bathroom furniture, bathtubs and shower trays, wellness systems, shower-toilets, tap fittings and accessories as well as installation systems.

Nano robots applied to medicine


Nanorobotics is the field of emerging technologies that creates machines or robots whose components are at or close to the nanometer scale (10⁻⁹ meters). More specifically, nanorobotics refers to the nanotechnological engineering involved in designing and building nanorobots: devices with a size of about 0.1 to 10 micrometers, built from nanoscale or molecular components. The names nanobots, nanoids, nanites, nanomachines and nanomites have also been used to describe these devices, which are currently in the research and development phase.

Nanometric-scale technology emerged 50 years ago, creating a new dimension destined to revolutionize the world we know, as it allows the molecular structure of materials to be manipulated to change their intrinsic properties and obtain new ones with revolutionary applications. This discipline, which flourished between the 1960s and 1980s, opens up an immense universe of possibilities for contemporary science and industry, and presents a booming global market whose value will exceed $125 billion over the next five years, according to the Global Nanotechnology Market report.

What is nano robotics?

To answer this question, we turn to:

Samuel Sanchez

Winner of MIT’s 2014 Innovators Under 35 award

While researching the subject, I was impressed by the applications of these machines at the microscopic scale.

Main applications for nano and micro machines

The applications for these devices seem endless and these are, from my point of view, the most interesting:

  • Cancer treatment: they will allow the identification and destruction of cancer cells far more effectively and accurately.
  • Mechanisms for the targeted delivery of drugs for the control and prevention of diseases.
  • Diagnostic imaging: creation of nanoparticles that accumulate in certain tissues so that, when the body is scanned with magnetic resonance systems, problems such as diabetes can be detected.
  • New sensing devices: with almost unlimited customization for sensing functions, nanorobotics will provide incredible sensing capabilities that we can integrate into our systems to monitor and measure everything around us.
  • Information storage devices: a bioengineer and geneticist at Harvard’s Wyss Institute has managed to store 5.5 petabits of data (about 700 terabytes) in a single gram of DNA, surpassing the previous record for DNA data density by a factor of 1,000.
  • New energy systems: nanorobotics could play an important role in developing more efficient renewable energy systems, or in making our current machines more energy efficient, so that they need less energy to run at the same level, or deliver more at the same energy cost.
  • Super-strong metamaterials: a team at Caltech has developed a new nanoscale material with interlocking struts. Like a miniature Eiffel Tower, it is one of the strongest and lightest substances ever created.
  • Smart windows and walls: electrochromic devices that change color depending on the applied potential. They are intended for energy-efficient smart windows that control the internal temperature of a room, clean themselves, and more.
  • Microsponges to clean the oceans: a sponge made of carbon nanotubes, capable of absorbing contaminants (fertilizers, pesticides, pharmaceuticals…) from water. This project is three times more efficient than previous initiatives, and its study has been published in the journal Nanotechnology, from IOP Publishing.
  • Replicators or “molecular assemblers”: devices capable of directing chemical reactions, placing reactive molecules with atomic precision.
  • Health sensors: they would monitor blood chemistry, flag out-of-range parameters, and detect spoiled food, inflammation in the body and much more.

I was even more impressed by what this technology could achieve in the fight against cancer, and by its other applications in medicine.

Teorema Virtual Concept Car | Pininfarina

Enjoy the Journey

Pininfarina paves the way to the future with TEOREMA, a futuristic and daring virtual concept car developed entirely using VR technologies. An all-new interpretation of fully electric, autonomous mobility in the name of user experience and technology, designed to create a sense of community and foster interaction between passengers and the outside environment.

Pininfarina has always looked to the future using concept cars as an innovation tool to chart the direction and introduce new visions in terms of usability and technology in the automotive industry. TEOREMA, in particular, wants to give people back the pleasure of living the car, driving and travelling, without the frustrations of increased congestion and other compromises, all while integrating AI, 5G and the latest technology to drive passengers towards new incredible experiences along the journey.

Kevin Rice
Chief Creative Officer

Shaping Design with the Language of Luxury Experience
Entering the TEOREMA is not much different from entering a living room. As the passenger walks in, the rear opens, the roof extends upwards and forward, and the floor lights up, guiding passengers to their seats. The car’s interior offers different experiences and moments, including areas of privacy where passengers can isolate themselves to sleep or rest.

Autonomous Drive to Reinvent the Car Experience

TEOREMA can easily switch across different driving modes according to passengers’ preferences and different driving situations:

AUTONOMY MODE: the vehicle is completely autonomous, so it needs no driver. In this mode the driver faces the other four passengers, with enough distance between them to give everyone the feeling of having their own private cocoon.

DRIVE MODE: in this mode there is a community feeling, and everything that happens in the motion of the vehicle is shared. The different areas of the interior take on the same color, providing a subconscious connection that binds all the occupants into a shared experience.

REST MODE: when the car is in rest mode, the whole interior becomes a social space where people can move to any position they want. The internal environment and the smart seats automatically change to allow people to socialize or lounge back.


A New Paradigm in User Experience Enabled by Innovative Technologies

WayRay for True Augmented Reality

Crisp and vivid virtual images with unprecedented color depth are aligned with the real world, keeping passengers informed about relevant traffic information, places of interest and curiosities. They appear behind the car’s windshield and side glass. Passengers can also interact with the displayed information to learn more or share it with other people on board.

Continental Engineering Services for Smart Surfaces and Intelligent Glass

Continental’s competences in Smart Surfaces and Intelligent Glass provide TEOREMA with important features in terms of both user experience and safety. Pop-up buttons are hidden under the car’s interior surfaces and only emerge when the driver passes a hand over them. Each button has a slightly different shape, allowing the driver to recognize it easily without taking their eyes off the road. The use of Smart Glass in the rear part of the car allows passengers to enjoy their privacy and to regulate the light that enters from outside, giving them the possibility, also thanks to the foldable flat seats, to create a comfortable cocoon in which to rest.

Poltrona Frau for Seats

The seats were designed together with Poltrona Frau to ensure maximum relaxation and to allow passengers to stretch out and doze off. The seats of TEOREMA can fold down flat, turning into a bench or a cot, so occupants can either face each other in a moment of conviviality or lie down during a more intimate time.

BENTELER for the Rolling Chassis

TEOREMA is based on a platform solution built on the BENTELER Electric Drive System (BEDS). This highly efficient solution enables new electric vehicles to be set up very quickly, with reduced complexity and high quality, thanks to its scalable and modular design. The low-profile construction of the Rolling Chassis gives the car generous interior space while keeping a relatively low overall height.

Google Cloud and C3 AI partner to provide industry solutions that will address real-world challenges in financial services, healthcare, manufacturing, supply chain, and telecommunications

REDWOOD CITY, Calif. and SUNNYVALE, Calif., Sept. 1, 2021 /PRNewswire/ — C3 AI and Google Cloud today announced a new, first-of-its-kind partnership to help organizations across multiple industries accelerate their application of artificial intelligence (AI) solutions. Under the agreement, both companies’ global sales teams will co-sell C3 AI’s enterprise AI applications, running on Google Cloud.

The entire portfolio of C3 AI’s enterprise AI applications, including industry-specific AI applications, C3 AI Suite®, C3 AI CRM, and C3 AI Ex Machina, is now available on Google Cloud’s global, secure, and low-latency infrastructure, enabling customers to run C3 AI on the industry’s cleanest cloud.

Going forward, C3 AI will also work closely with Google Cloud to ensure that its applications fully leverage the accuracy and scale of multiple Google Cloud products and capabilities, including Google Kubernetes Engine, Google BigQuery, and Vertex AI, helping customers build and deploy ML models more quickly and effectively.

C3 AI’s enterprise AI applications, built on a common foundation of Google Cloud’s infrastructure, AI, machine learning (ML) and data analytics capabilities, will complement and interoperate with Google Cloud’s portfolio of existing and future industry solutions. Customers will be able to deploy combined offerings to solve industry challenges in several verticals, including:

  • Manufacturing: Solutions to improve reliability of assets and fleets with AI-powered predictive maintenance, improve revenue and product forecasting accuracy, and improve the sustainability of manufacturing facilities and operations through optimized energy management.
  • Supply Chain & Logistics: Solutions to help supply-chain reliant businesses understand risks in their supply networks, maximize resilience, and optimize inventory accordingly.
  • Financial Services: Solutions to help financial services institutions modernize their cash management offerings, improve lending processes, and reduce customer churn.
  • Healthcare: Solutions to improve the availability of critical healthcare equipment via AI-powered asset readiness and preventative maintenance.
  • Telecommunications: Solutions to improve network resiliency and overall customer experience, while reducing costs and the carbon footprint of operations.

“Combining the innovation, leadership, scale, and go-to-market expertise of Google Cloud with the substantial business value delivered from C3 AI applications, this partnership will dramatically accelerate the adoption of Enterprise AI applications across all industry segments,” said Thomas M. Siebel, CEO of C3 AI.

“Google Cloud and C3 AI share the vision that artificial intelligence can help businesses address real-world challenges and opportunities across multiple industries,” said Thomas Kurian, CEO at Google Cloud. “We believe that by delivering C3 AI’s applications on Google Cloud, and by partnering to address specific industry use cases with AI, we can help customers benefit more quickly and at greater scale.”

“Organizations across industries are accelerating their digital transformations with cloud-based solutions, purpose-built to deliver specific business outcomes,” said Ritu Jyoti, group vice president, AI and Automation Research at IDC. “This new partnership between C3 AI and Google Cloud represents an acceleration of this trend, as the two companies partner to expand the application of AI-powered solutions in the enterprise.”

“This is fundamentally game-changing for the hyperscale computing market,” said Jim Snabe, former co-CEO, SAP AG. “Google Cloud is changing the competitive discussion from CPU seconds and gigabyte-hours, to enterprise AI applications producing enormous value for customers, shareholders, and society at large.”

About C3 AI
C3.ai, Inc. (NYSE: AI) is a leading provider of Enterprise AI software for accelerating digital transformation. C3 AI delivers a family of fully integrated products: C3 AI Suite, an end-to-end platform for developing, deploying, and operating large-scale AI applications; C3 AI Applications, a portfolio of industry-specific SaaS AI applications; C3 AI CRM, a suite of industry-specific AI CRM applications; and C3 AI Ex Machina, a no-code AI solution to meet the needs of citizen data scientists. Learn more at:

About Google Cloud
Google Cloud accelerates organizations’ ability to digitally transform their business with the best infrastructure, platform, industry solutions and expertise. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology – all on the cleanest cloud in the industry. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.


SOURCE Google Cloud

For further information: C3 AI Public Relations: Edelman, Lisa Kennedy, 415-914-8336

Virtual Reality Delivers Immersive, Remote Collaboration for Automotive Design Teams

With HTC VIVE, NVIDIA, and Autodesk in their corner, automotive OEMs are evolving the design process, and shortening the time it takes to go from concept to model.

Collaboration | Design/Visualization

Creating an automobile from scratch is no easy feat. From concepting the look of a new model, to sculpting it from clay, to designing the intricate pieces that give the car its personality, to collaborating with teams around the globe, it’s a complex process to say the least. Now, do it in an even shorter timeframe than the competition, and create a whole lineup with nuanced features to suit an increasingly discerning consumer segment.

To better meet the growing demand for global team collaboration and efficient product development, auto manufacturers have turned to virtual reality. Teams from around the world can join their colleagues in the Autodesk VRED environment to work together on the virtual production of their vehicle models. The combination of Autodesk VRED software, powered by NVIDIA Quadro RTX and experienced through the VIVE Pro Eye, delivers an immersive design experience for many of these top automotive companies.

Hyundai Motors and the Migration to VR-Assisted Design

Hyundai Motors understands the need for a more integrated approach to model development. The many rounds of design feedback with numerous teams that have always been a staple of the automotive development process are now giving way to VR-assisted car design, and Hyundai was quick to adopt it to positively disrupt its process.

In March of this year, Hyundai created the world’s largest VR design evaluation facility, where 20 designers, engineers, or key stakeholders can simultaneously log into a virtual environment to evaluate a car design. The Autodesk VRED system powered by NVIDIA Quadro RTX, in conjunction with VIVE Pro Eye’s foveated rendering, provides a cutting-edge experience, allowing the most minute details to be reviewed, discussed, approved, or revised.

No stone is left unturned within the VR experience: by simply moving the controller, teams can modify the virtual car’s color, texture, and parts. From there, teams can also view a high-fidelity rendering of the car’s interior, envisioning what the car buyer may actually experience once the model rolls off the line and into the dealership. To add an additional element of realism, cars can be placed against real-world backgrounds, truly illustrating every facet of car design.

Hyundai has embraced VR-assisted car development and is looking to migrate its current process to fully virtual design in the future.

Hyundai Designers involved in the VR-assisted evaluation process. From left, Senior Researchers Choi KyungWon, Park YoungSoo, and Bae ByoungSang, and Researcher Kang SungMook

BUGATTI’s Virtual Design Process Marries Tradition with Time and Cost Savings

BUGATTI is no stranger to the quality and craftsmanship that go into creating automotive works of art. The core tradition upheld with any car design is that it starts with the creative vision of an individual, a vision that then comes alive on screen. Once a consensus has been reached, the digital model is transferred to a physical model made from rigid foam. Including digital design in the creative process has helped the product development team cut costs by a quarter and design time by half. Thanks to VR and 3D technology, the BUGATTI Divo was designed in six months instead of the year it usually takes.

BUGATTI visualizes design through VR. On the left is Head of CAD and Visualization Ahmet Daggün; on the right is Chief Designer Achim Anscheidt.

Ford Holds Realtime Design Reviews in the Virtual Space

The pandemic wasn’t going to stop Ford from doing what they love: designing cars. So the company issued VR kits to engineers and directors across the globe so teams could maintain the momentum of designing the next Ford vehicle. But it didn’t stop there.

To ensure everyone had a chance to review designs, Ford held a live virtual design review in a digital airplane hangar, created in both 2D for viewers without VR equipment and as complete 3D models for those in VR. The models showcased came from #TeamFordzilla, Ford’s e-sports team, with each car designed by #TeamFordzilla’s designers.

Utilizing Autodesk VRED with NVIDIA Quadro RTX workstations, the teams were able to create realistic, high-fidelity virtual models that could be shown off without a VR headset. But when paired with the VIVE Pro and VIVE Pro Eye, the experience took the event to another level, as engineers and attendees could really see how different light sources create different reflective patterns on the models.

The live design review session, featuring Moray Callum (VP, Ford Design), Amko Leenarts (Design Director, Ford of Europe), Joel Piaskowski (Ford Global Design Director, Cars and Crossovers), and Kemal Curic (Design Director, Lincoln Motor Company), allowed everyone to experience how the Ford team goes through a design review and to learn what aspects of design are important to them.

The automobile industry is but one of many industries whose complex development process has been made easier, more efficient, and more collaborative thanks to the work of VIVE, Autodesk, and NVIDIA. Ultimately, through the evolution of digital research and design, which leverages virtual reality, advanced design software, and powerful graphics processing, automobile manufacturers will continue to be at the forefront of innovation and better serve the needs of the car-buying public.

TESLA – Artificial Intelligence & Autopilot – Tesla Bot

We develop and deploy autonomy at scale in vehicles, robots and more. We believe that an approach based on advanced AI for vision and planning, supported by efficient use of inference hardware, is the only way to achieve a general solution for full self-driving and beyond.


Build silicon chips that power our full self-driving software from the ground up, taking every small architectural and micro-architectural improvement into account while pushing hard to squeeze maximum silicon performance-per-watt. Perform floor-planning, timing and power analyses on the design. Write robust, randomized tests and scoreboards to verify functionality and performance. Implement compilers and drivers to program and communicate with the chip, with a strong focus on performance optimization and power savings. Finally, validate the silicon chip and bring it to mass production.

Neural Networks

Apply cutting-edge research to train deep neural networks on problems ranging from perception to control. Our per-camera networks analyze raw images to perform semantic segmentation, object detection and monocular depth estimation. Our birds-eye-view networks take video from all cameras to output the road layout, static infrastructure and 3D objects directly in the top-down view. Our networks learn from the most complicated and diverse scenarios in the world, iteratively sourced from our fleet of nearly 1M vehicles in real time. A full build of Autopilot neural networks involves 48 networks that take 70,000 GPU hours to train. Together, they output 1,000 distinct tensors (predictions) at each timestep.
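The per-camera and birds-eye-view split described above can be sketched as a toy in plain NumPy. This is purely illustrative and not Tesla's architecture: `per_camera_features` stands in for a real per-camera network, and the "projection" onto the top-down grid is crude block averaging rather than a learned transform.

```python
import numpy as np

def per_camera_features(image, n_classes=3):
    """Toy stand-in for a per-camera network: per-pixel class scores."""
    rng = np.random.default_rng(0)
    W = rng.standard_normal((3, n_classes))  # a 1x1 "conv" over RGB
    return image @ W                          # (h, w, n_classes)

def fuse_to_birds_eye(camera_feats, grid=(16, 16)):
    """Toy BEV fusion: block-average each camera's features onto one
    shared top-down grid, then sum contributions across cameras."""
    gh, gw = grid
    bev = np.zeros((gh, gw, camera_feats[0].shape[-1]))
    for feats in camera_feats:
        h, w, c = feats.shape
        pooled = feats[:gh * (h // gh), :gw * (w // gw)] \
            .reshape(gh, h // gh, gw, w // gw, c).mean(axis=(1, 3))
        bev += pooled
    return bev

cams = [np.ones((32, 48, 3)) for _ in range(8)]  # 8 synthetic cameras
feats = [per_camera_features(img) for img in cams]
bev = fuse_to_birds_eye(feats)
print(bev.shape)  # (16, 16, 3)
```

However many cameras feed in, all of them land on the same top-down grid, which is the property that lets downstream consumers reason in one shared coordinate frame.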

Autonomy Algorithms

Develop the core algorithms that drive the car by creating a high-fidelity representation of the world and planning trajectories in that space. In order to train the neural networks to predict such representations, algorithmically create accurate and large-scale ground truth data by combining information from the car’s sensors across space and time. Use state-of-the-art techniques to build a robust planning and decision-making system that operates in complicated real-world situations under uncertainty. Evaluate your algorithms at the scale of the entire Tesla fleet.
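Cost-based selection among candidate trajectories, as mentioned above, can be illustrated with a toy 1-D example. All names and the scoring rule below are invented for illustration and are not Tesla's planner.

```python
def score(traj, goal, obstacles):
    """Lower is better: distance from the trajectory's endpoint to the
    goal, plus a large penalty for passing too close to any obstacle."""
    goal_cost = abs(traj[-1] - goal)
    clearance = min(abs(p - o) for p in traj for o in obstacles)
    return goal_cost + (10.0 if clearance < 0.5 else 0.0)

def plan(candidates, goal, obstacles):
    """Pick the lowest-cost candidate trajectory."""
    return min(candidates, key=lambda t: score(t, goal, obstacles))

# 1-D toy world: waypoints along a line, with an obstacle at 2.0
candidates = [
    [0.0, 1.0, 2.0, 3.0],   # drives straight through the obstacle
    [0.0, 0.5, 1.5, 3.0],   # keeps its distance, still reaches the goal
    [0.0, 1.0, 1.0, 1.0],   # safe but never reaches the goal
]
best = plan(candidates, goal=3.0, obstacles=[2.0])
print(best)  # [0.0, 0.5, 1.5, 3.0]
```

Real planners score far richer costs (comfort, legality, predicted agent behavior) over continuous trajectory spaces, but the structure is the same: generate candidates, score them, pick the minimum.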

Code Foundations

Throughput, latency, correctness and determinism are the main metrics we optimize our code for. Build the Autopilot software foundations up from the lowest levels of the stack, tightly integrating with our custom hardware. Implement super-reliable bootloaders with support for over-the-air updates and bring up customized Linux kernels. Write fast, memory-efficient low-level code to capture high-frequency, high-volume data from our sensors, and to share it with multiple consumer processes, without impacting central memory access latency or starving critical functional code of CPU cycles. Squeeze and pipeline compute across a variety of hardware processing units, distributed across multiple system-on-chips.
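One common pattern for capturing high-rate sensor data without ever blocking the producer is a fixed-capacity ring buffer that overwrites the oldest sample. Here is a minimal single-threaded Python sketch of that idea; real implementations live in shared memory with lock-free indices and are written in low-level code, and all names below are invented.

```python
class SensorRing:
    """Fixed-capacity ring buffer: the producer overwrites the oldest
    sample instead of blocking, so a slow consumer never stalls capture."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.write = 0  # total samples ever written

    def push(self, sample):
        self.buf[self.write % self.capacity] = sample
        self.write += 1

    def latest(self, n):
        """Return up to the n most recent samples, oldest first."""
        n = min(n, self.write, self.capacity)
        start = self.write - n
        return [self.buf[i % self.capacity] for i in range(start, self.write)]

ring = SensorRing(capacity=4)
for t in range(10):          # the producer outruns the consumer
    ring.push(("imu", t))
print(ring.latest(3))        # [('imu', 7), ('imu', 8), ('imu', 9)]
```

The key design choice is bounded memory with overwrite-on-full semantics: capture stays deterministic and allocation-free no matter how far behind a consumer falls.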

Evaluation Infrastructure

Build open- and closed-loop, hardware-in-the-loop evaluation tools and infrastructure at scale, to accelerate the pace of innovation, track performance improvements and prevent regressions. Leverage anonymized characteristic clips from our fleet and integrate them into large suites of test cases. Write code simulating our real-world environment, producing highly realistic graphics and other sensor data that feed our Autopilot software for live debugging or automated testing.
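The regression-prevention loop described above reduces to replaying a suite of recorded clips and comparing scores against stored baselines. A minimal sketch, with an invented toy "planner" and made-up clip names standing in for real fleet clips:

```python
def run_regression_suite(clips, planner, baseline, tol=1e-6):
    """Replay recorded clips through a planner and flag any clip whose
    average score drops below the stored baseline (a regression)."""
    regressions = []
    for clip_id, frames in clips.items():
        score = sum(planner(f) for f in frames) / len(frames)
        if score < baseline[clip_id] - tol:
            regressions.append(clip_id)
    return regressions

# Toy scoring by lane keeping: each frame is a lateral offset, and
# staying near the lane center earns a higher score.
planner = lambda offset: 1.0 - abs(offset)
clips = {"merge_01": [0.0, 0.1, 0.2], "cut_in_02": [0.3, 0.5]}
baseline = {"merge_01": 0.9, "cut_in_02": 0.7}
print(run_regression_suite(clips, planner, baseline))  # ['cut_in_02']
```

At fleet scale the same shape holds, with the clip set growing from anonymized characteristic clips and the scoring swapped for richer open- and closed-loop metrics.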

Tesla Bot

Develop the next generation of automation, including a general purpose, bi-pedal, humanoid robot capable of performing tasks that are unsafe, repetitive or boring. We’re seeking mechanical, electrical, controls and software engineers to help us leverage our AI expertise beyond our vehicle fleet.


Here’s Why CES 2022 in Las Vegas Will be for the Vaccinated

As a CEO running a national trade association – the Consumer Technology Association (CTA) – these past 18 months have put me on a roller coaster of emotions as I work to lead by example and make decisions that will have positive effects.

In July 2020, I led the decision process to cancel our live, in-person, CES and opt for a digital show instead.  This was a difficult choice because I knew it was going to have a negative effect on a lot of people. First, our staff – we had to downsize and lay off employees.  Second, the City of Las Vegas – they rely on events like CES to bring hundreds of millions of dollars to the city and fuel their local economy.  Canceling our in-person CES affected the hotels and the hospitality workers and the workers who welcome us each year and help us pull off the most influential tech gathering in the world! Third, the industry – tens of thousands each year gather at CES to see the latest innovation, to meet new business partners and to develop new ideas.

As difficult as that decision was, I knew it was the right one because with no vaccine available it simply was not safe to hold CES during a pandemic. Our only defense against COVID-19 at that time was to minimize contact with people, wash our hands and wear face masks. Canceling the in-person CES was the right thing to do – we wanted to do our part and not spread the disease.

Fast forward a year to August 2021. Vaccines are readily available in the United States and several other countries. More and more Americans are now fully vaccinated. However, as vaccines are making their way around the world, so is a new threat – the Delta variant.

We have seen a spike in cases due to the Delta variant, which is severely hurting the unvaccinated population. Yes, there are breakthrough cases for the vaccinated, but many of those have few or no symptoms at all. And of the vaccinated getting the Delta variant, only a tiny percentage are hospitalized.

We prioritize the safety and security of CES participants, which is why, once again, my team has confronted a major decision: CES will be in person in Las Vegas in January 2022, and we will require all attendees to be fully vaccinated. We are also assessing proof of a positive antibody test as a requirement and will share more details on this later. Importantly, we will continue to follow state and local guidelines and recommendations by the CDC and will announce additional protocols as we get closer to the show.

We all play a role in stopping the spread – requiring proof of vaccination for CES 2022 is one way we can take responsibility on our part.

Many are clamoring to return to the serendipity and relationship-building of in-person events—so are we. CES is where business gets done. It’s an economic engine for our industry and an opportunity for companies from around the world, both large and small, to launch products, build brands and form partnerships. Tech has also evolved by leaps and bounds in the last year and a half—we need to convene and connect so we can maintain our momentum and continue to inspire innovative solutions for a rapidly changing world.

We know our decision to require vaccines, and potentially positive antibody tests, may not be popular with some, but for many others it will let them know they can experience CES once again and get back to business as usual.

For those who cannot attend CES in person, we offer the CES experience through our digital platform and hope to welcome you back to Las Vegas in 2023. Regardless of how you choose to participate in CES 2022, I hope you find inspiration, make new connections, build your business and step into the rest of the year with a renewed sense of hope for how tech continues to improve all our lives.