Driverless cars: A bad joke or the future?


If you're expecting traditional car companies to crack self-driving cars, then the 2030 timeline is accurate, because they're slow to innovate.

However, it is not going to be a traditional car company that cracks autonomous driving (AD), but rather a software-focused tech company. I still maintain that we will see it within the next 12-24 months. Google's prowess in AI will enable it to beat everyone else to the finish line; how it chooses to monetize the service remains to be seen, as it is not interested in manufacturing cars.

I think Google will be first too. If there is another company that can do AI or AD better than them, then their war chest of profits will be used to make an acquisition. They already learned a lot from buying Boston Dynamics and DeepMind. IBM is a huge competitor worth keeping an eye on. However, Google has a faster-moving culture.

I see two routes to monetization:
1. License the technology to a car manufacturer that puts the technology in the hands of consumers.
2. Partner with a ride-hailing service and make the technology exclusively available through that service.

Traditional car companies won't beat the tech giants to AD. Just look at how voice control is still utterly shite in 99.9% of cars.
 
AI-based software alone is not enough. It will be once additional sensors are added, greater on-board processing power becomes affordable, and 5G mobile networks are in place & operating.

It's not about who will be first but who will be able to offer the tech at a reasonable & affordable price ... who will offer the most reliable tech ... who will be able to incorporate the tech into the vehicle's design without the current design anomalies (eg. the roof dome as sensor hub).

So, it's not all about software (AI) development but also about hardware development (incl. design engineering) & production (eg. modules will have to be designed to fit the car's design & architecture, and not vice versa).

I'm not sure Google has partnered with the best automotive partner (FCA).

Also ... AD - especially when offered by a ride-hailing / ride-sharing provider - will have to be error-proof. 100% reliable. Not 90%, not 95%, not 98%. 100%. Right now various AI algorithms are rated by accuracy (in %). Nobody will want to ride in an AD car with a 5% chance of crashing & killing you. The accuracy of AI algorithms can be rated, while human driver accuracy just can't be. AD will be much more exposed to first-hand experience by riders / passengers. Any error, choppy ride etc. will hurt the image of the provider.
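A quick back-of-envelope calculation (my own illustration; the fleet-wide trip count is a hypothetical number, not from the post) shows why per-trip reliability compounds so brutally at scale:

```python
# Back-of-envelope sketch: why "99% reliable" is nowhere near good enough
# for a driverless ride service. The trips-per-day figure is hypothetical.

def expected_failures(per_trip_reliability: float, trips: int) -> float:
    """Expected number of failed trips, assuming independent trips."""
    return trips * (1.0 - per_trip_reliability)

daily_trips = 1_000_000  # hypothetical fleet-wide trips per day
for reliability in (0.95, 0.99, 0.9999):
    fails = expected_failures(reliability, daily_trips)
    print(f"{reliability:.2%} reliable -> ~{fails:,.0f} failed trips per day")
```

Even at "four nines" per trip, a big fleet would still see failures every single day, which is the point being made above about rider-facing errors.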

It's just not right to compare AD AI to eg. voice recognition, image recognition & personal-preference AI algorithms. Two different worlds. As said: to assess the situation properly & to react properly, the AD AI algorithms will have to be supported by high-res cameras monitoring eg. the face movements, behaviour, actions etc. of other drivers & traffic participants ... that's still missing. We are getting there, but a lot of work still has to be done. At least a decade.
 
AI-based software alone is not enough. It will be once additional sensors are added, greater on-board processing power becomes affordable, and 5G mobile networks are in place & operating.

Local storage and data compression technology will have to improve drastically too, as an AD ecosystem would be vulnerable if it relied on constantly syncing data over 5G. A complementary system would be direct car-to-car communication via private networks.

Lots and lots of problems to solve. All very exciting to follow.

As said: to assess the situation properly & to react properly, the AD AI algorithms will have to be supported by high-res cameras monitoring eg. the face movements, behaviour, actions etc. of other drivers & traffic participants ... that's still missing. We are getting there, but a lot of work still has to be done. At least a decade.

Indeed, and as I mentioned earlier, humans have an amazing ability to make decisions when placed in unfamiliar situations where the variables are unknown. AI beating humans at chess and Go is an amazing achievement, but observing, understanding and conquering new worlds is a greater challenge.

The commercialisation of truly driverless cars is likely to be very phased, e.g. starting in San Francisco and Silicon Valley and then being rolled out to specific (safe) districts of Tokyo, London, etc.

I won't be impressed until an AD car is placed in Bombay or Mogadishu and finds its way through the chaos. :D
 
AI-based software alone is not enough. It will be once additional sensors are added, greater on-board processing power becomes affordable, and 5G mobile networks are in place & operating.

It's not about who will be first but who will be able to offer the tech at a reasonable & affordable price ... who will offer the most reliable tech ... who will be able to incorporate the tech into the vehicle's design without the current design anomalies (eg. the roof dome as sensor hub).

So, it's not all about software (AI) development but also about hardware development (incl. design engineering) & production (eg. modules will have to be designed to fit the car's design & architecture, and not vice versa).

I'm not sure Google has partnered with the best automotive partner (FCA).

Also ... AD - especially when offered by a ride-hailing / ride-sharing provider - will have to be error-proof. 100% reliable. Not 90%, not 95%, not 98%. 100%. Right now various AI algorithms are rated by accuracy (in %). Nobody will want to ride in an AD car with a 5% chance of crashing & killing you. The accuracy of AI algorithms can be rated, while human driver accuracy just can't be. AD will be much more exposed to first-hand experience by riders / passengers. Any error, choppy ride etc. will hurt the image of the provider.

It's just not right to compare AD AI to eg. voice recognition, image recognition & personal-preference AI algorithms. Two different worlds. As said: to assess the situation pro...
You hit the key bottlenecks. I underlined your second sentence.
The key issues are:
1. Sensor reliability and redundancy.
2. Operational stability of the sensors.
3. Cost of processing: hardware, software, weight, etc.
4. Reliability of the dependencies in the systems.
5. I doubt all these people touting AD have actually done a full risk assessment.
This is not like the autopilot system in an airplane. The hazard and activity matrix makes the risk extremely high, so to manage it, legislation will be required, along with massive infrastructure investments. I am sure all western governments have massive budget surpluses (sarcasm). Look at the US, which can't even pass a simple infrastructure bill to fix serious infrastructure needs, but is willing to run up the deficit to pass tax cuts.
Europe is no different with regard to budget deficits.
 
If you're wondering why all the carmakers are suddenly scrambling to start their own ride-sharing services, have a look at the chart below. Once self-driving cars roll out to the masses, the blue line will look like a hockey stick.

[Image: Screen Shot 2018-07-17 at 12.53.11 PM.webp]
 
MERCEDES WILL LAUNCH SELF-DRIVING TAXIS IN CALIFORNIA NEXT YEAR

[Image: F015Web_15C226_009.webp]

Daimler is planning to bring self-driving Mercedes vehicles to a city in California in 2019. The cars will be S-Class sedans at first, but eventually will look more like the company's F 015 “Luxury in Motion” concept. DAIMLER AG


Like in a Tough Mudder, you've got a few strategies when it comes to the race to launch a taxi-like service with autonomous vehicles. You can start early and keep a slow but steady pace. You can show up a bit late, then try to sprint through it. Or you can hold back, see what trips up other contenders, and then slowly work your way through the obstacles.

The big automakers tend to fall into the third category. They may have taken a few years to recognize that shared autonomous vehicles could annihilate their business model—selling human-driven cars to individual humans—but they're now making real progress toward the finish line. And today, Mercedes-Benz parent company Daimler took a cautious step into the swamp stomp, announcing plans to launch a self-driving car pilot somewhere in Silicon Valley next year.

Daimler is calling its service an “automated shuttle,” but it's not referring to some blobby, slow-moving van. It’s going to start out using a fleet of S-Class luxury sedans and B-Class hatchbacks, with long-term plans for vehicles designed for autonomous driving, like the F 015 “Luxury in Motion” concept it showed off a few years back.

The automaker is still negotiating the particulars of the deal, has not divulged which city will host this program, and is being cagey about details like how many cars will make up the robo-fleet. It does plan to have human safety drivers on board to keep an eye on the system. Passengers, who will request rides via an app, will travel for free. The Germans are more open about the lessons they've learned watching the self-driving car industry start to take shape, including the myriad complexities of the challenge. “Hardly any company can meet this challenge alone,” says Uwe Keller, Daimler's head of autonomous driving.

That's why Daimler is partnering with Bosch, one of the world’s largest automotive suppliers, which has a strong track record in building active safety systems and some of the semi-autonomous systems now on luxury cars. The two companies will together work on the sensors these vehicles use to perceive the world, and the software that makes the actual driving decisions. But that's just part of the problem.

One of the toughest challenges for any autonomous vehicle is coping with the gargantuan pile of data its suite of lidar, radar, cameras, and other sensors can produce. A single Bosch stereo camera generates 100 gigabytes of data every 0.62 miles. Bosch and Daimler reckon they’ll need the equivalent computing power of six highly advanced desktop workstations in each car to handle it all, but the space demands and power draw make that an obvious non-starter.
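The scale of that figure is easy to sanity-check. In this rough sketch, the 100 GB per 0.62 miles (roughly 1 km) number comes from the article; the average urban speed is my own assumption:

```python
# Rough sanity check of the sensor-data figure quoted above.
# 100 GB per 0.62 miles (~1 km) is from the article (one Bosch stereo
# camera); the 30 km/h average urban speed is my own assumption.

GB_PER_KM = 100.0        # one stereo camera, per the article
URBAN_SPEED_KMH = 30.0   # assumed average city driving speed

gb_per_hour = GB_PER_KM * URBAN_SPEED_KMH
gb_per_second = gb_per_hour / 3600.0
print(f"~{gb_per_hour:,.0f} GB/hour, ~{gb_per_second:.2f} GB/second from one camera")
# -> ~3,000 GB/hour, ~0.83 GB/second from one camera
```

Nearly a gigabyte per second from a single camera, before adding lidar and radar, which is why the in-car compute and storage problem is so hard.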

To help there, Daimler will work with Nvidia, whose Pegasus AI supercomputer marks its best combination yet of minimal power consumption with maximum performance (try 300 trillion operations every second). Because it can get a bit hot when running at full speed, the companies are planning to integrate water cooling for the computer into an electric car’s battery coolant system.

Nvidia already works with over 370 other automotive partners on autonomous efforts, but says Daimler and Bosch stand out. “They have very detailed, articulated plans for new classes of vehicles,” says Danny Shapiro, senior director of automotive at Nvidia. And plans for putting those vehicles on the road. Over the past few years, Daimler has been cultivating the various technologies it needs to run a taxi service. It has an Uber-esque app, MyTaxi, which operates in 50 European cities. It has Daimler Fleet Management to, you know, manage the fleet of vehicles and keep them fueled and serviced. And last week, it became the first international car builder to get a license to test autonomous driving on the streets of Beijing, China, the world's largest car market.

“We have all the technology you need,” Wilko Stark, who heads autonomous and mobility services for Daimler, told WIRED in April. It has watched the others, now it’s time to get muddy.

Mercedes Will Launch Self-Driving Taxis in California Next Year
 
^^
I have watched a few of Nvidia's recent keynotes, and AI and autonomous cars are a BIG focus for them.

Once self-driving cars roll out to the masses, the blue line will look like a hockey stick.

[Image: Screen Shot 2018-07-17 at 12.53.11 PM.webp]

Indeed it will. Last month I went into central London around peak time and found Uber to be better value at just a £3-4 premium over taking the tube. I couldn't be bothered with getting sweaty on a crowded train, having to change trains once, and then having to walk 5 minutes to reach my destination.

If you are two or three people, ride-hailing makes even more sense, and it is going to challenge not only taxis, rentals and car ownership but also public transportation.
 
From today's WSJ. :)

Should Artificial Intelligence Copy the Human Brain?

The biggest breakthrough in AI, deep learning, has hit a wall, and a debate is raging about how to get to the next level


[Image: B3-BH547_keywor_M_20180802082733.webp]


By Christopher Mims - WSJ

Everything we’re injecting artificial intelligence into—self-driving vehicles, robot doctors, the social-credit scores of more than a billion Chinese citizens and more—hinges on a debate about how to make AI do things it can’t, at present. What was once merely an academic concern now has consequences for billions of dollars’ worth of talent and infrastructure and, you know, the future of the human race.

That debate comes down to whether or not the current approaches to building AI are enough. With a few tweaks and the application of enough brute computational force, will the technology we have now be capable of true “intelligence,” in the sense we imagine it exists in an animal or a human?

On one side of this debate are the proponents of “deep learning”—an approach that, since a landmark paper in 2012 by a trio of researchers at the University of Toronto, has exploded in popularity. While far from the only approach to artificial intelligence, it has demonstrated abilities beyond what previous AI tech could accomplish.

The “deep” in “deep learning” refers to the number of layers of artificial neurons in a network of them. As in their biological equivalents, artificial nervous systems with more layers of neurons are capable of more sophisticated kinds of learning.

‘We need to take inspiration from nature.’
—Gary Marcus, NYU

To understand artificial neural networks, picture a bunch of points in space connected to one another like the neurons in our brains. Adjusting the strength of the connections between these points is a rough analog for what happens when a brain learns. The result is a neural wiring diagram, with favorable pathways to desired results, such as correctly identifying an image.
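The "adjusting the strength of the connections" idea can be shown with a toy example: a single artificial neuron whose connection weights are repeatedly nudged toward a desired output. This is a sketch of the principle only, nothing like the deep networks the article discusses:

```python
# Toy illustration of the "adjust connection strengths" idea from the
# article: one artificial neuron learns the logical OR of two inputs
# by repeatedly nudging its weights toward the desired answer.
# A sketch of the principle, not a deep network.

def step(x: float) -> float:
    """Fire (1.0) if the weighted input is positive, else stay silent."""
    return 1.0 if x > 0 else 0.0

# training data: the OR function
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w1, w2, bias = 0.0, 0.0, 0.0
lr = 0.1  # learning rate: how hard each mistake nudges the weights

for _ in range(20):  # a few passes over the data
    for (x1, x2), target in samples:
        out = step(w1 * x1 + w2 * x2 + bias)
        err = target - out
        # strengthen or weaken each connection in proportion to the error
        w1 += lr * err * x1
        w2 += lr * err * x2
        bias += lr * err

print([step(w1 * a + w2 * b + bias) for (a, b), _ in samples])
# -> [0.0, 1.0, 1.0, 1.0]
```

Stacking many layers of such units, and adjusting all their weights at once, is what the "deep" in deep learning refers to.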

Today’s deep-learning systems don’t resemble our brains. At best, they look like the outer portion of the retina, where a scant few layers of neurons do initial processing of an image.

It’s very unlikely that such a network could be bent to all the tasks our brains are capable of. Because these networks don’t know things about the world the way a truly intelligent creature does, they are brittle and easily confused. In one case, researchers were able to dupe a popular image-recognition algorithm by altering just a single pixel.

Despite its limitations, deep learning powers the gold-standard software in image and voice recognition, machine translation and beating humans at board games. It’s the driving force behind Google’s custom AI chips and the AI cloud service that runs on them, as well as Nvidia Corp.’s self-driving car tech.

Andrew Ng, one of the most influential minds in AI and former head of Google Brain and Baidu Inc.’s AI division, has said that with deep learning, a computer should be able to do any mental task that the average human can accomplish in a second or less. Naturally, the computer should be able to do it even faster than a human.

On the other side of this debate are researchers such as Gary Marcus, former head of Uber Technologies Inc.’s AI division and currently a New York University professor, who argues that deep learning is woefully insufficient for accomplishing the sorts of things we’ve been promised. It could never, for instance, be able to usurp all white collar jobs and lead us to a glorious future of fully automated luxury communism.

Dr. Marcus says that to get to “general intelligence”—which requires the ability to reason, learn on one’s own and build mental models of the world—will take more than what today’s AI can achieve.

“That they get a lot of mileage out of [deep learning] doesn’t mean that it’s the right tool for theory of mind or abstract reasoning,” says Dr. Marcus.

To go further with AI, “we need to take inspiration from nature,” says Dr. Marcus. That means coming up with other kinds of artificial neural networks, and in some cases giving them innate, pre-programmed knowledge—like the instincts that all living things are born with.

Many researchers agree with this, and are working to supplement deep-learning systems in order to overcome their limitations, says David Duvenaud, an assistant professor of machine learning at the University of Toronto. One area of intense research is determining how to learn from just a few examples of a phenomenon—instead of the millions that deep-learning systems typically require.

Researchers are also trying to give AI the ability to build mental models of the world, something even babies can accomplish by the end of their first year. Thus, while a deep-learning system that has seen a million school buses might fail the first time it’s shown one that’s upside-down, an AI with a mental model of what constitutes a bus—wheels, a yellow chassis, etc.—would have less trouble recognizing an inverted one.

Supplementing deep learning with other kinds of AI is all well and good, says Thomas Dietterich, former president of the Association for the Advancement of Artificial Intelligence, but it’s important not to lose sight of the magic of deep learning and machine learning in general.

“For machine-learning research, the goal is to see how far we can get computer systems to learn just from data and experience, as opposed to building it in by hand,” says Dr. Dietterich. The problem isn’t that innate knowledge in an AI is bad, he says; it’s that humans are bad at knowing what kind of innate knowledge to program into them in the first place.

“In principle we don’t need to look at biology” to figure out how to build future AIs, says Dr. Duvenaud. But the kinds of more sophisticated systems that will succeed deep-learning-focused tech don’t work yet, he says.

Until we figure out how to make our AIs more intelligent and robust, we’re going to have to hand-code into them a great deal of existing human knowledge, says Dr. Marcus. That is, a lot of the “intelligence” in artificial intelligence systems like self-driving software isn’t artificial at all. As much as companies need to train their vehicles on as many miles of real roads as possible, for now, making these systems truly capable will still require inputting a great deal of logic that reflects the decisions made by the engineers who build and test them.

Should Artificial Intelligence Copy the Human Brain?
 
Typical VC-funded project. Race to push out a minimum viable product so that the company valuation can be inflated ahead of selling to a big corporation, whose job it will then be to commercialise the product and turn a profit.
 
[Image: pilotstadt-titelbild-w1366xh683-cutout.webp]


Bosch and Daimler. Metropolis in California to become a pilot city for automated driving

Bosch and Daimler are speeding up the development of fully-automated and driverless driving (SAE Level 4/5) in the city and are decisively setting the course. The partners have chosen California as the pilot location for the first test fleet. In the second half of 2019, Bosch and Daimler will offer customers a shuttle service with automated vehicles on selected routes in a Californian metropolis.

Daimler Mobility Services is envisaged as the operator of this test fleet and the app-based mobility service. The pilot project will demonstrate how mobility services such as car sharing (car2go), ride-hailing (mytaxi) and multi-modal platforms (moovel) can be intelligently connected to shape the future of mobility. In addition, the partners have decided on the US technology company Nvidia as the supplier of the artificial intelligence platform as part of their control unit network.

For the joint development of a driving system for fully-automated and driverless vehicles, Bosch and Daimler rely on their automotive expertise accumulated over many decades to bring mature and safe innovations to market. Both companies are guided by a shared philosophy.


The decisive factor is to introduce a safe, dependable and mature system. Safety has the highest priority, and is the constant theme of all aspects and development stages on our way to the start of series production. If in doubt, thoroughness comes before speed.
[Image: michael-hafner-w110xh110.webp]

Michael Hafner -- Head of Automated Driving at Daimler AG


“Developing automated driving to a level ready for series production is like a decathlon”, according to Dr Stephan Hönle, Senior Vice President Business Unit Automated Driving at Robert Bosch GmbH. “It’s not enough to be good in one or two areas. Like us, you have to master all disciplines. Only then will we succeed in bringing automated driving to the roads and the city safely.”

Evaluation of sensor data within milliseconds
A decisive factor for fully-automated and driverless driving in an urban environment is the reliable recognition of the vehicle’s surroundings with the aid of various sensors. Analysing and interpreting the variety of incoming data and translating them into driving commands within a very short time requires enormous computing power – the fully-automated, driverless vehicle will be a mobile super-computer. At the same time, fully-automated, driverless driving in the city requires a versatile, redundant systems architecture and the highest level of functional safety. To achieve this level of safety, the necessary computing operations are performed in parallel in different circuits. This means that the system has instant recourse to these parallel computing results when necessary.
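The parallel-circuits redundancy described above can be sketched in miniature. The decision rule and channel functions below are hypothetical stand-ins for illustration, not anything from the actual Bosch/Daimler system:

```python
# Miniature sketch of the redundancy idea described above: the same
# decision is computed on independently implemented "channels", the
# results are cross-checked, and disagreement falls back to the safe
# action. The channel functions are hypothetical stand-ins.

def channel_a(obstacle_distance_m: float) -> str:
    return "brake" if obstacle_distance_m < 20.0 else "continue"

def channel_b(obstacle_distance_m: float) -> str:
    # independently coded duplicate of the same decision rule
    return "continue" if obstacle_distance_m >= 20.0 else "brake"

def decide(obstacle_distance_m: float) -> str:
    a = channel_a(obstacle_distance_m)
    b = channel_b(obstacle_distance_m)
    if a == b:
        return a
    return "brake"  # channels disagree: instant recourse to the safe action

print(decide(35.0))  # -> continue
print(decide(12.0))  # -> brake
```

Real systems run such parallel computations on separate hardware circuits so a fault in one path never leaves the vehicle without a valid result.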

For their driving system, Bosch and Daimler thus rely on a control unit network made up of several individual control units. The US technology company Nvidia supplies the platform required for this, which can run the artificial intelligence (AI) algorithms generated by Bosch and Daimler for the vehicle’s movement. The network of control units collates the data from all sensors with radar, video, lidar and ultrasound technology (sensor data fusion), evaluates them within milliseconds and plans the movements of the vehicle. All in all, the control unit network has a computing capacity of hundreds of trillions of operations per second. That’s as much as several S-Class vehicles together could reach just a few years ago.

[Image: pilotstadt-grafik-en-w960xh540-cutout.webp]


Metropolis in California will be a pilot city for automated test fleet
The control unit network will also be used in the fleet vehicles which Daimler and Bosch will put on the roads of California in the second half of 2019. Not only that: Both partners will offer customers an automated shuttle service on select routes in a city located in the San Francisco Bay in Silicon Valley. The test operation will provide information about how fully-automated and driverless vehicles can be integrated into a multi-modal transport network. Many cities face numerous challenges that are increasingly burdening the existing transport system. The test is to show how this new technology might be a solution to these challenges.

Driverless driving makes urban mobility more attractive
With their development cooperation on fully-automated and driverless driving in urban environments which began in April 2017, Bosch and Daimler aim to improve the flow of traffic in cities, enhance safety on the road and provide an important building block for the way traffic will work in the future. The technology will, among other things, boost the attraction of car sharing. In addition, it will allow people to make the best possible use of their time in the vehicle, and open up new mobility opportunities for people without a driver’s licence, for example.

The vehicle comes to the driver, not the driver to the vehicle. Within a defined city area, users can conveniently order a car-sharing car or a vehicle that drives by without a driver. The project combines the overall vehicle and mobility expertise of one of the world’s leading premium manufacturers with the systems and hardware expertise of one of the world’s largest suppliers. The aim of the ensuing synergies is to introduce the new technology early and fully validated.

Future mobility

Bosch and Daimler join forces to work on fully automated, driverless system.

[Image: case-titel-w300xh300-cutout.webp]


Bosch and Daimler employees share the same office space
Bosch and Daimler employees work together in teams in two regions: in the greater Stuttgart area in Germany, and around Sunnyvale in Silicon Valley to the south of San Francisco in the USA. Employees from both companies share the same office space. This ensures rapid communication across working disciplines and short decision-making paths. At the same time, they have access to the entire know-how of their colleagues in the parent companies. The partners are financing the development work equally.

The personnel in this cooperation are jointly developing the concepts and algorithms for the fully-automated, driverless drive system. Daimler's task is to bring the drive system into the car. To this end, the company is providing the necessary development vehicles, test facilities and later the vehicles for the test fleet. Bosch is responsible for the components (sensors, actuators and control units) specified during the development work. For test purposes the partners use their laboratories and test rigs, plus their respective test sites in Immendingen and Boxberg. Furthermore, since 2014 Mercedes-Benz has approval to test automated vehicles in the Sunnyvale/California region. The company also has comparable approval for the Sindelfingen/Böblingen region since 2016. Bosch was the world’s first automotive supplier to test automated driving on public roads in Germany and in the US in early 2013.

Source: Daimler
 
Daimler and Bosch: San Jose targeted to become the pilot city for an automated on-demand ride-hailing service




Test area will be San Carlos/Stevens Creek corridor between downtown and west San Jose
The on-demand ride hailing service app will offer an automated driving experience to a selected user community

Stuttgart/San Jose. Located on the southern shore of San Francisco Bay in Silicon Valley, and with more than 1 million inhabitants, San Jose is the third biggest city in California. It is planned to be the pilot city for trials, targeted to begin during the second half of 2019, of the highly and fully automated driving (SAE Level 4/5) on-demand ride-hailing service recently announced by Daimler and Bosch. The three parties have signed a memorandum of understanding to pursue and finalize this activity. Using automated Mercedes-Benz S-Class vehicles, Daimler and Bosch propose to offer the service to a selected user community in the San Carlos/Stevens Creek corridor between downtown and west San José. With its population expected to grow 40 percent in the next two decades, the metropolitan area faces growing transportation challenges. Moreover, San Jose wants to prepare itself for a future in which autonomous cars hit the streets.

“The pilot project is an opportunity to explore how autonomous vehicles can help us better meet future transportation needs,” says Sam Liccardo, mayor of San Jose. “For many years we have consistently pushed autonomous driving. With this pilot we will generate valuable insights into how to connect fully automated vehicles in the best way with users of future mobility services,” says Dr. Michael Hafner, Vice President Drive Technologies and Automated Driving at Daimler AG. “We have to rethink urban transportation. Automated driving will help us complete the picture of future urban traffic,” says Dr. Stephan Hönle, senior vice president of the Automated Driving business unit at Robert Bosch GmbH.

The on-demand ride-hailing service app operated by Daimler Mobility Services will demonstrate how mobility services such as car sharing (car2go), ride-hailing (mytaxi) and multi-modal platforms (moovel) can be intelligently connected. The test operation will provide information about how highly and fully automated vehicles can be integrated into a multi-modal transportation network. The intent is to provide a seamless digital experience, in which a selected user community will have the opportunity to hail a self-driving car, monitored by a safety driver, from a designated pick-up location and drive automatically to their destination.

Automated vehicles make urban mobility more attractive

With their joint development work on highly and fully automated driving (SAE level 4/5) in urban environments, Daimler and Bosch aim to improve the flow of traffic in cities, enhance road safety, and provide an important building block for the way traffic will work in the future. Among other things, with cars coming to drivers, not the other way around, the technology will boost the attraction of car sharing. Without compromising driving safety, it will allow people to make the best possible use of the time they spend in their vehicles, and open up new mobility opportunities for people without a driver’s licence.

Daimler and Bosch associates share the same office space

Daimler and Bosch associates involved in the development project work together in teams in two regions: in the greater Stuttgart area in Germany and, in the United States, around Sunnyvale in Silicon Valley between San José and San Francisco. Since they share the same office space, rapid communication across working disciplines is ensured and decision-making paths are short. At the same time they can draw on the combined know-how of their colleagues in the parent companies.

The companies' associates are jointly developing the concepts and algorithms for the highly and fully automated drive system. Daimler's task is to bring the drive system into the car. The company is providing the necessary development vehicles, test facilities, and vehicles for the test fleet. Bosch is responsible for the components specified during the development work, such as sensors, actuators, and control units. For test purposes, the partners use their laboratories and test rigs, plus their respective test sites in Germany. Since obtaining its Autonomous Vehicle Testing Permit from the California Department of Motor Vehicles in 2014, Mercedes-Benz has been testing automated vehicles in the Sunnyvale/California region. And since 2016, it has had similar approval for the greater Stuttgart area in Germany. In early 2013, Bosch was the world’s first automotive supplier to test automated driving (SAE level 3) on public roads in Germany and in the United States.

Source: MB Passion
 
From what I read, a lot of companies that are not related to cars, like this one, like Google, and so many others, are fully involved in having their own autonomous system.
Can someone explain to me why? What monetization do they see in this?

I read that it is so important in terms of producing money that it could exceed the sale of cars.
What is the point? Would they sell it, as suppliers, to the automakers that don't have it?

Why is it so important for BMW, Mercedes etc. not to have to buy it from others? To sell it as an optional extra? They see it as a matter of life or death, and I do not understand why there is so much circus.

In other words, systems and technical elements have always been developed, but I've never seen so much interest from companies not related to automobiles. I hope somebody can explain it to me.


AR-190619934.webp


Strategic cooperation: ZF and the Chinese internet giant Baidu are working together to develop autonomous driving. (Photo: ZF)
 
From what I read, a lot of companies that are not related to cars, like this one, like Google, and so many others, are fully involved in having their own autonomous system.
Can someone explain to me why? What monetization do they see in this?

Specifically for Google: it is the world's largest advertising company. The more Google knows about you, the higher its profits.

The self-driving car industry is going to give birth to hundreds of new industries. When societies moved from horse-and-cart transportation to the automobile, it gave rise to the petroleum industry, motels, repair shops, truck stops, car dealerships, etc., which were nonexistent until the first car came about. The same thing will happen with self-driving cars, and the company whose software is the most dominant will likely have major influence and control over the new industries that spring to life.

More than likely, the first mover takes all.
 
Specifically for Google: it is the world's largest advertising company. The more Google knows about you, the higher its profits.

The self-driving car industry is going to give birth to hundreds of new industries. When societies moved from horse-and-cart transportation to the automobile, it gave birth to the petroleum industry, motels, repair shops,...

Thank you, very illustrative. Companies are thinking outside the box, far beyond what we imagine; as you say, this will generate countless new businesses that they must have analysed, precisely because they know things that we do not.
 
Many of the visions of "new mobility", with autonomous self-driving cars, aquatic cars or flying cars, probably came to them while they were stoned watching The Jetsons.
 
This $500 Lidar System Could Prove Elon Musk Wrong

On Thursday, Luminar announced Iris, a production-ready lidar system with a starting price of just $500, a huge price cut compared to the $75,000 lidar system produced by leading developer Velodyne.

“With the release of Iris, Luminar uniquely offers a LiDAR solution that meets the performance, safety and cost metrics that OEMs require to commercialize autonomous vehicles,” Ben Kortlang, founding partner at Luminar investor G2VP, said in a press release. “Luminar’s product is the enabling technology that will put autonomous vehicles on the road sooner than we expected.”

 
