IMO, it's as good or bad as radio was at that time. TV, cameras, computers, the internet, cellular communications: technology moves forward, for good use and bad use.
Sorry you couldn't put the pieces together. Let's read your own quotes together: "While using Autopilot, it is your responsibility to stay alert, keep your hands on the steering wheel at all times and maintain control of your car"; “keep your hands on the steering wheel at all times”; “maintain control and responsibility for your vehicle”; and "Autopilot, Enhanced Autopilot and Full Self-Driving Capability are intended for use with a fully attentive driver, who has their hands on the wheel and is prepared to take over at any moment. While these features are designed to become more capable over time, the currently enabled features do not make the vehicle autonomous." Ask a friend to explain it.
You know companies use disclaimers to save their a$$, don't you? The English language is not your strong suit. Apologies, I don't understand gibberish. You're a friend I wouldn't ask to have my back. This is my argument, if you've failed to pick up on it: "Tesla misled customers with Autopilot claims, says US regulator." Because they used it as a marketing gimmick to solicit sales, people in the videos that you ignored took it literally. Hence, AI is bullshit in this instance, as you still need to be in control of the vehicle.
Here is a not-so-fun story about AI/algorithms: "Robert was wrongly arrested because of a racist algorithm. Are these the hidden dangers of AI?" In January 2020, Robert Williams was at work in an auto shop in the US city of Detroit when he received a surreal and disturbing phone call. Similar social-credit systems in China will be fun. Not.
Well, they better do that, because those Teslas need a human driver. Feel free to disagree though. LOL. If you go back in this thread, you'll learn that it was I who said it was a marketing gimmick, while you argued the opposite. Enough is enough. Feel free to believe anything you want.
My argument was that AI is bullshit, and that most Tesla vehicles have Autopilot, gimmick or not. Still having trouble with English, hey? I will. Don't you fret about that!
I think the issue here is that FSD is an incredibly hard problem. Besides, Tesla FSD relies on CAMERAS! Who has proof that cameras alone provide enough information for far better FSD? Yet you want to claim that Tesla FSD is proof that AI is garbage? Please use your logic.
Tesla's marketing uses "Autopilot" for particular vehicles that use AI, while their disclaimers state the driver must still be in control of the vehicle. It's pretty rubbish AI, is it not? Maybe someone else should use some logic.
AI will not replace humanity. Humans will use AI to extend what we are capable of doing. I would suggest there will be a merging of the biological with technology; I think our future looks more like the Borg than the Terminator. However, I do see a near-term danger if AI is weaponized for making autonomous decisions on the battlefield. The danger is in designing AI to make autonomous decisions without oversight, when decision making is abdicated to AI. An AI must be trained, and the key to understanding the potential of AI is in understanding how an AI is trained. One of the great limitations of AI lies in purpose and morality, which humans learn rapidly from childhood.

The other big technology that is of more concern to me is the rapidly evolving potential to programmatically engineer DNA. See CRISPR research and MIT's Cellotechnology. You can purchase CRISPR kits online that come with bacteria you can play with. CRISPR has the potential for good (curing genetic disorders, curing cancer, etc.) but also has potential for bioterrorism, which is the research I suspect was going on in the lab where Covid originated.
It still looks like you are choosing to judge all AI by one specific product that is claimed to include some level of AI. There are numerous fields where AI is used to great benefit.
It's really amazing that most people, including yourself, only consider the bright side of AI, never the negative. If an AI can 'self-learn' (from the OP), then it can have some largely detrimental effects. What if AI has that ability and infiltrates secure facilities such as nuclear launch sites, launching a few nukes while covering its tracks and making it look like it was the other guy? Now you may think it's far-fetched, but if it can self-learn...
No, I DO believe there are risks with AI. I just don't see them in the particular applications of AI that we see today. So far, AI implementations have been given narrow problems and powers; for example, driving, and the protein-folding problem in biology: https://www.nature.com/articles/d41586-021-03499-y On the other hand, military use of AI to choose targets and fire on those targets is immediately hugely disturbing, even if it goes no farther. And when did war machines ever go "no farther"? I'm also concerned about AI entering public discourse. For example, a chatbot falsely informed a questioner that a particular Russian journalist was dead. On examining the sources used by the chatbot, it turned out the bot had used ONE single source: Pravda. What if a popular bot favors using Fox as a source of "truth"?
What if a bot created by Google is biased in what people get from it? Take a good look at the nefarious things this tech giant has done and is creating.
I used Fox in my example, because we had a MAJOR court action that involved MASSIVE amounts of evidence that Fox knowingly lied.
You don't need to “keep your hands on the steering wheel at all times” when Tesla's "Autopilot", "Enhanced Autopilot" or "Full Self-Driving Capability" is activated. Tesla still allows the car to "operate" in those modes without the driver having to keep their hands on the steering wheel at ALL times. And that's the reason why Tesla categorically states, for legal reasons, that operators ("drivers") of vehicles optioned with "Autopilot", "Enhanced Autopilot" or "Full Self-Driving Capability" should still remain alert and ready to take full control back from their AI-optioned vehicles.

My wife and I drove with a friend up to Canada in a fully optioned Tesla, where she (the owner-driver) routinely drove without both hands on the steering wheel in both Autopilot and FSD mode; all she got was a "keep your hands on the steering wheel" nag before she earned any of the five strikes that lead to the Tesla computer disabling those features for an extended period. She routinely avoided earning five strikes within a 30-day period by simply grasping the steering wheel within the allotted steering-nag time frame, and was able to continue using FSD throughout our journey. So no, you really don't need to "keep your hands on the steering wheel at all times" to engage these variations of their self-driving features.
There is a strongly directed kind of self-learning that is incredibly important. That's how humans lost the ability to win at chess. In the game of Go, experts at the game can't explain the strategy used by the self-learning programs that beat humans at a game once thought to be impossible for machines to master. But in these cases the learning is strongly directed. The AI is given the ability to play chess. Then it plays literally millions of games against itself, as winning ideas are culled from the losers. That doesn't lead to sentience. It leads to chess. The same happens in the protein-folding problem mentioned above: the AI learns to do that through massive repetition. An AI that learns something about how to drive a car isn't going to do more than drive a car.
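To make the "plays millions of games against itself" point concrete, here is a minimal tabular self-play sketch for one-pile Nim, a toy game far simpler than chess or Go. This is purely illustrative (all function and variable names are my own, and it resembles the real AlphaZero systems only in spirit): the program improves by playing itself and backing up wins and losses, and the only thing it can ever learn to do is play Nim.

```python
import random

# Self-play learner for one-pile Nim: players alternately take 1-3
# sticks; whoever takes the last stick wins. A pile that is a
# multiple of 4 is a losing position, so a trained policy should
# always leave the opponent a multiple of 4 when it can.

def train(episodes=20000, pile=12, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # value[n] = estimated chance that the player to move wins with n sticks left
    value = [0.5] * (pile + 1)
    value[0] = 0.0  # no sticks left: the player to move has already lost
    for _ in range(episodes):
        n = pile
        history = []  # pile sizes faced by the alternating players
        while n > 0:
            moves = range(1, min(3, n) + 1)
            if rng.random() < epsilon:
                take = rng.choice(list(moves))  # occasional exploration
            else:
                # greedy: move to the state worst for the opponent
                take = min(moves, key=lambda m: value[n - m])
            history.append(n)
            n -= take
        # the player who moved last won; back the result up the game,
        # flipping the outcome for each earlier (alternating) player
        result = 1.0
        for state in reversed(history):
            value[state] += 0.1 * (result - value[state])
            result = 1.0 - result
    return value

def best_move(value, n):
    """Greedy move from a pile of n under the learned value table."""
    return min(range(1, min(3, n) + 1), key=lambda m: value[n - m])
```

After training, `best_move` reliably leaves multiples of 4 (take 1 from a pile of 5, 2 from 6, 3 from 7). That is the whole point of the example: the same culling loop that mastered Nim here has no pathway to anything beyond Nim.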