
The AI Thread


RWJC


Some cherry-picked lines from an article in today's NYT:


Stanford researchers have ranked 10 major A.I. models on how openly they operate.

 

How much do we know about A.I.? The answer, when it comes to the large language models that firms like OpenAI, Google and Meta have released over the past year: basically nothing.

 

“Three years ago, people were publishing and releasing more details about their models,” Mr. Liang* said. “Now, there’s no information about what these models are, how they’re built and where they’re used.”

 

“As the impact of this technology is going up, the transparency is going down,” said Rishi Bommasani, one of the researchers.

 

We can’t have an A.I. revolution in the dark. We need to see inside the black boxes of A.I., if we’re going to let it transform our lives.


*Percy Liang, who leads Stanford’s Center for Research on Foundation Models

 

I don't want to be the self-appointed Chicken Little of AI, but I remain fearful. Not of the good, just the bad. And I am truly impatient for government to step in. To paraphrase an AI entrepreneur I saw recently on 60 Minutes: 'We can't leave all this in the hands of entrepreneurs.'

 

(And I fully realize that not all businesses or governments are good at following the rules.)

  • Cheers 2
  • Vintage 1

12 minutes ago, UnkNuk said:

 

That may have been their interview with Geoffrey Hinton.  It can be viewed here:

 

https://www.cbsnews.com/news/geoffrey-hinton-ai-dangers-60-minutes-transcript/

It was actually from an episode shown last season. I just looked at the CBS link you provided and it has no transcript.

 

It was the episode from 4/16/2023, or maybe 7/9/2023, should anyone care to look.

 

Thanks for your link.

  • Cheers 1

  • 1 month later...

So I've mentioned my interest in what AI can do for medicine.  I even think the new Beatles record was pretty good.  But I just can't get past this killer robot thing.

 

Worried about the risks of robot warfare, some countries want new legal constraints, but the U.S. and other major powers are resistant.

It seems like something out of science fiction: swarms of killer robots that hunt down targets on their own and are capable of flying in for the kill without any human signing off.

But it is approaching reality as the United States, China and a handful of other nations make rapid progress in developing and deploying new technology that has the potential to reshape the nature of warfare by turning life and death decisions over to autonomous drones equipped with artificial intelligence programs.

https://www.nytimes.com/2023/11/21/us/politics/ai-drones-war-law.html

  • Cheers 1
  • Upvote 1
  • ThereItIs 1

A.I. Belongs to the Capitalists Now

 

What happened at OpenAI over the past five days could be described in many ways: A juicy boardroom drama, a tug of war over one of America’s biggest start-ups, a clash between those who want A.I. to progress faster and those who want to slow it down.

 

But it was, most importantly, a fight between two dueling visions of artificial intelligence.

 

In one vision, A.I. is a transformative new tool, the latest in a line of world-changing innovations that includes the steam engine, electricity and the personal computer, and that, if put to the right uses, could usher in a new era of prosperity and make gobs of money for the businesses that harness its potential.

 

In another vision, A.I. is something closer to an alien life form — a leviathan being summoned from the mathematical depths of neural networks — that must be restrained and deployed with extreme caution in order to prevent it from taking over and killing us all.

 

With the return of Sam Altman on Tuesday to OpenAI, the company whose board fired him as chief executive last Friday, the battle between these two views appears to be over.

 

Team Capitalism won. Team Leviathan lost.

 

https://www.nytimes.com/2023/11/22/technology/openai-board-capitalists.html

 

I think I suffer from double vision, as I see both the wonders and the horrors. Proceed with caution and err on the side of it, I say.


On 11/21/2023 at 11:06 AM, Satchmo said:

So I've mentioned my interest in what AI can do for medicine.  I even think the new Beatles record was pretty good.  But I just can't get past this killer robot thing.

 

Worried about the risks of robot warfare, some countries want new legal constraints, but the U.S. and other major powers are resistant.

It seems like something out of science fiction: swarms of killer robots that hunt down targets on their own and are capable of flying in for the kill without any human signing off.

But it is approaching reality as the United States, China and a handful of other nations make rapid progress in developing and deploying new technology that has the potential to reshape the nature of warfare by turning life and death decisions over to autonomous drones equipped with artificial intelligence programs.

https://www.nytimes.com/2023/11/21/us/politics/ai-drones-war-law.html

 

This reminds me of something I've been talking about for years.

 

Nanotech swarms will be the future of war.

A cloud of millions of tiny drones that can get inside of us. A deadly new fog of war. Full control.

 

Add AI to that equation... gulp.


17 minutes ago, bishopshodan said:

 

This reminds me of something I've been talking about for years.

 

Nanotech swarms will be the future of war.

A cloud of millions of tiny drones that can get inside of us. A deadly new fog of war. Full control.

 

Add AI to that equation... gulp.

Well, such things may be designed by AI, but I'm not sure I foresee AI nanobots. But what do I know? That's the deal, really: nobody, including the experts, really knows how AI works or what it can do.

 

 

  • Upvote 1

4 minutes ago, Satchmo said:

Well, such things may be designed by AI, but I'm not sure I foresee AI nanobots. But what do I know? That's the deal, really: nobody, including the experts, really knows how AI works or what it can do.

 

 

 

I'm going back a while, but I think it was Michio Kaku who put these thoughts in my head. It might have been in his book Physics of the Future.

 

Not so much the AI part, but adding that to the mix could be an ugly development, for sure.

  • Cheers 1

Putin announces his entry into the race and gives largely cultural reasons for doing so.

 

Putin to boost AI work in Russia to fight a Western monopoly he says is 'unacceptable and dangerous'

https://www.ctvnews.ca/world/putin-to-boost-ai-work-in-russia-to-fight-a-western-monopoly-he-says-is-unacceptable-and-dangerous-1.6659316

 

 

 

 


  • 1 month later...
On 11/28/2023 at 4:45 PM, 4petesake said:

 

 

The danger of AI & social media.

Powerful 3 minutes.

 

 

 

 

 

Or just don't buy into crap... Still, your Facebook and Insta should be private unless you're an adult and can make your own choices...

 

I decided to be open with all my online aliases... One, it makes me accountable for my actions, and people can trust that I have dozens of social media accounts, all under the same username. I made that choice back when we couldn't use a PIN or third-factor authentication... It's much harder for your identity to be stolen now than it was five years ago...


  • 4 months later...

OpenAI definitely jumped the gun with AI when it is clearly not ready to deal with real-world interaction with humans.

 

There is a reason why Google and other big tech companies were doing this AI research in the background.

 

They are about a decade early, and now, with OpenAI releasing mayhem on the world, every company has to rush its development so as not to be seen as lagging behind.


  • 1 month later...

https://thewalrus.ca/the-fastest-way-to-lose-a-court-case-use-chatgpt/

 

The Fastest Way to Lose a Court Case? Use ChatGPT

Burnout and heavy workloads are driving lawyers to AI—and into trouble

BY JULIE SOBOWALE | Updated 16:28, Jul. 18, 2024 | Published 6:30, Jul. 18, 2024

On 7/19/2024 at 9:55 AM, 6of1_halfdozenofother said:

https://thewalrus.ca/the-fastest-way-to-lose-a-court-case-use-chatgpt/

 

The Fastest Way to Lose a Court Case? Use ChatGPT

Burnout and heavy workloads are driving lawyers to AI—and into trouble

BY JULIE SOBOWALE | Updated 16:28, Jul. 18, 2024 | Published 6:30, Jul. 18, 2024

 

lol you gotta be pretty dumb if you try to win a court case using artificial intelligence


  • 2 weeks later...

Perhaps not AI, but an algorithm.

Just went through the 30 clicks to the left on my opening 'stories' on MSN, and 29 of them are repeats.

The only new thing was a different ad.

Does the machine think I forgot ALL these stories?


5 minutes ago, Gurn said:

Perhaps not AI, but an algorithm.

Just went through the 30 clicks to the left on my opening 'stories' on MSN, and 29 of them are repeats.

The only new thing was a different ad.

Does the machine think I forgot ALL these stories?

I hate it when I see stories popping up from weeks or months ago. Brutal.

  • Vintage 1

