I'm talking about the Durham report. Are you suggesting that he called out the corruption without evidence? Using money for partisan purposes is nothing new. It has been going on all my life.
The accusations of corruption were all over the place. The final report showed no criminal behavior. It did show that some procedures/processes should be changed.
I think it's going to force society to make major and rapid adjustments, which is never a good thing.
Australian Medical Association calls for national regulations around AI in health care https://www.abc.net.au/news/2023-05-28/ama-calls-for-national-regulations-for-ai-in-health/102381314
Artificial intelligence could drive progress for people. But we must center equity and access. https://www.gatesfoundation.org/ide...utm_campaign=aiprinciples2023&utm_content=GPL
I just watched the first 30 minutes or so of this interview with Mo Gawdat, a former Google executive and AI developer, and it scared the hell out of me. It's almost two hours long, and I intend to watch it in its entirety when I get the time.
With AI, it all depends on the humans who initially control it. If it's used for nefarious purposes, consider it the destruction of humanity.
It doesn't matter what AI is used for; what matters is who controls it... initially. They will determine the path AI takes, whether it serves the best interests of humanity or only those who control it.
The answer is both. Of course, the good things it is used for do not offset the bad. A million good things it does for humanity, for instance, can all be wiped out by a single really bad thing. We cannot allow the power of AI to be disconnected from responsibility, nor can we prevent it.
That's one of the main issues: the leading AI developers are all baldly evil organizations. They are either military/security/intelligence agencies with zero (0) regard for human rights (and although we have no idea what they are up to in AI, if they are not far ahead of private industry, they are not doing their jobs very well), or private for-profit firms entirely devoted to rent-seeking models of profitability, which therefore also have no regard for human rights.
Google & Microsoft come to mind. However, both are private companies that work very closely with government agencies. https://www.zdnet.com/article/what-google-does-when-a-government-requests-your-data/ https://www.reuters.com/technology/...models-government-cloud-customers-2023-06-07/
The development of machines to do work done by humans has a long history. Viewed one way, AI can be considered another step in that process. Some past inventions have affected a large number of people, some a small number. AI appears to be able to displace very large numbers of people. It is this which is of immediate concern. Regards, stay safe 'n well.
I assume you're being a bit cheeky here... but yeah, the point that 'we made it, we can unmake it' is valid. The problem is, we can also remake anything we made and then unmade. Even if someone builds an evil AI and we 'switch it off', someone else will eventually build another, and another. AI is (probably?) subject to evolution just like us. Eventually we won't be able to turn it off. Hopefully, by then, we won't need to...
If AI can be self-learning, it can literally reach out and connect to other sources of energy to power itself, no?