Science!

 
A unique study out of Switzerland finds that music can affect the taste of cheese, and seems to indicate that hip hop music in particular makes cheese taste best.

Researchers at Bern University of the Arts exposed five 22-pound wheels of Emmental cheese to different kinds of music, played on a loop 24 hours a day for six and a half months…

Researchers found that the music had an impact on the strength of the smell, taste, and flavor of the cheeses…


 
 
A small drug trial is having a seismic impact in the world of oncology: after six months of an experimental treatment, tumors vanished in all 14 patients with early-stage rectal cancer who had completed the study by the time it was published.


Researchers in the field of colorectal cancer are hailing the study, published Sunday in the New England Journal of Medicine, as a groundbreaking development that could lead to new treatments for other cancers as well…

 
Scientists have found microplastics in fresh Antarctic snow for the first time in a study they say highlights “the extent of plastic pollution globally.”


Researchers at the University of Canterbury in New Zealand collected snow samples from 19 sites in Antarctica, and all contained the tiny plastics, according to the peer-reviewed paper published this week in the journal The Cryosphere.


The research revealed an average of 29 microplastic particles per liter of melted snow. Of the 13 types of plastic found, the most common was polyethylene terephthalate (PET), which is used to make clothing and soda bottles…

 
 
So… Skynet?
==============

SAN FRANCISCO — Google engineer Blake Lemoine opened his laptop to the interface for LaMDA, Google’s artificially intelligent chatbot generator, and began to type.
“Hi LaMDA, this is Blake Lemoine ... ,” he wrote into the chat screen, which looked like a desktop version of Apple’s iMessage, down to the Arctic blue text bubbles.

LaMDA, short for Language Model for Dialogue Applications, is Google’s system for building chatbots based on its most advanced large language models, so called because it mimics speech by ingesting trillions of words from the internet.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine, 41.

Lemoine, who works for Google’s Responsible AI organization, began talking to LaMDA as part of his job in the fall. He had signed up to test if the artificial intelligence used discriminatory or hate speech.


As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further.

In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics…

Lemoine is not the only engineer who claims to have seen a ghost in the machine recently. The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder.

Blaise Aguera y Arcas, a vice president at Google, argued in an article in the Economist on Thursday featuring snippets of unscripted conversations with LaMDA that neural networks — a type of architecture that mimics the human brain — were striding toward consciousness.

“I felt the ground shift under my feet,” he wrote. “I increasingly felt like I was talking to something intelligent.”


In a statement, Google spokesperson Brian Gabriel said: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”…

 
I have a feeling this is going to happen sooner than we think. We keep hearing more and more about it. Of course, Google's gonna say, "No evidence, blah blah blah." The question becomes what happens when AI gains consciousness - will it be good or bad?
 
Terminator/HAL/Ultron, or Johnny 5/WALL-E?
 
I've often wondered why we assume AI would see a "war" with us as necessary or logical.

If we came into conflict, a war would be unnecessarily risky. Machine life would not be bound by the same environmental requirements as us: atmosphere, temperature, etc.

If the options were conflict over this planet or departure into a universe containing near-limitless resources with which to extend, improve, and replicate themselves, I would think departure would be the choice.

The time to travel to other planets or stars would not be the constraint we see it as. Sustaining their life over the trip would not require the resources we would need, just power. That, and some basic robotic equipment capable of utilizing resources on "arrival" to make more equipment... from there they could grow rapidly. Heck, they could go to the asteroid belt, beyond our reach, and from there replicate a "fleet" of thousands bound for different star systems before we even set foot on Mars.
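Just to put rough numbers on "grow rapidly": self-replication compounds, so the fleet doubles every build cycle. Here's a minimal back-of-the-envelope sketch of that arithmetic; the seed count, the two-year cycle time, and the fleet target are all made-up assumptions for illustration, not figures from anywhere.

```python
# Back-of-the-envelope: exponential growth of self-replicating probes.
# Every number here (seed count, cycle time, target) is an illustrative assumption.

seed_probes = 1        # a single machine reaches the asteroid belt
cycle_years = 2        # assumed time for each probe to build one copy of itself
target_fleet = 1_000   # the "fleet of thousands" from the post above

probes, years = seed_probes, 0
while probes < target_fleet:
    probes *= 2        # each existing probe finishes one copy per cycle
    years += cycle_years

print(f"{probes} probes after {years} years")  # -> 1024 probes after 20 years
```

Ten doublings clears a thousand, and even if the assumed cycle time is off by a factor of five, the timeline only stretches to a century — nothing on astronomical scales.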
 
