Science!

This is very hopeful. If you ever come across a follow-up on this topic, please post it here.
Thanks!
 
Many years ago at the beginning of the season, I changed it to that as I expected us to have lots of wins. I haven't changed it since then.
OK....I figured the "us" in your "I expected US to have lots of wins" statement is the Saints...but I didn't want to assume that :)
* * *
Don't boot me off the forum, but I'm only here for the social aspect.
I don't follow the Saints at all.
 
So… Skynet?
==============

SAN FRANCISCO — Google engineer Blake Lemoine opened his laptop to the interface for LaMDA, Google’s artificially intelligent chatbot generator, and began to type.
“Hi LaMDA, this is Blake Lemoine ... ,” he wrote into the chat screen, which looked like a desktop version of Apple’s iMessage, down to the Arctic blue text bubbles.

LaMDA, short for Language Model for Dialogue Applications, is Google’s system for building chatbots based on its most advanced large language models, so called because they mimic speech by ingesting trillions of words from the internet.
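For a feel of what "a chatbot built on a language model" means mechanically, here is a minimal sketch using the open GPT-2 model via the Hugging Face transformers library. GPT-2 is a tiny, freely available stand-in, not LaMDA itself; the prompt format below is invented for illustration:

```python
# Illustrative only: LaMDA is not public, so GPT-2 (a far smaller open
# model) stands in to show the basic "chatbot on a language model" loop.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Human: Hi, how are you today?\nBot:"
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"][len(prompt):])  # the model's continuation
```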

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine, 41.

Lemoine, who works for Google’s Responsible AI organization, began talking to LaMDA as part of his job in the fall. He had signed up to test whether the artificial intelligence used discriminatory or hate speech.


As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further.

In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics………

Lemoine is not the only engineer who claims to have seen a ghost in the machine recently. The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder.

Blaise Aguera y Arcas, a vice president at Google, argued in an article in the Economist on Thursday featuring snippets of unscripted conversations with LaMDA that neural networks — a type of architecture that mimics the human brain — were striding toward consciousness.
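"Mimics the human brain" is loose shorthand: a neural network is just layers of weighted sums passed through nonlinear functions. A self-contained toy sketch in numpy, with invented sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A tiny two-layer feed-forward network: each "neuron" is a weighted
# sum of its inputs pushed through a nonlinearity.
x = rng.normal(size=4)        # a 4-feature input
W1 = rng.normal(size=(8, 4))  # layer 1: 4 inputs -> 8 hidden units
W2 = rng.normal(size=(1, 8))  # layer 2: 8 hidden units -> 1 output

hidden = sigmoid(W1 @ x)
output = sigmoid(W2 @ hidden)
print(output)                 # a single activation between 0 and 1
```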

“I felt the ground shift under my feet,” he wrote. “I increasingly felt like I was talking to something intelligent.”


In a statement, Google spokesperson Brian Gabriel said: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims.

He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”……..

Well, if it's mimicking the human brain, we are definitely in trouble.
 
As scientific knowledge evolves, researchers often find that their old assumptions — even ones underpinning centuries of work — no longer apply.


The same principle is at play when it comes to evolutionary trees, a study in the journal Communications Biology suggests. Researchers say the method by which animals are sorted into evolutionary categories is flawed — and that it might be time to build new trees based on modern molecular science and geographic distribution.


Evolutionary trees have been around since Charles Darwin, who used the idea of a “tree of life” to map out the relationships between humans and primates. Other researchers continued his work, developing what are known as phylogenetic trees.

But a recent look at the actual genetic relationships between organisms in those trees reveals that the anatomy-based classification system might miss the mark.

Trees were historically mapped using morphology — the similarities and differences in organisms’ anatomy.

Below the surface, however, creatures can be more genetically similar to organisms that aren’t on their traditional tree.


Although biologists have created “molecular trees” that map out genetic similarities between species, those trees often flatly contradict the morphological classifications that group organisms by how they look.

When the researchers compared both kinds of trees and mapped them based on where animals live, they found that those with molecular similarities were more likely to live near one another than those that simply looked similar……..
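The tree-building side of this is straightforward to sketch. Given a matrix of pairwise genetic distances (the numbers and species names below are invented for illustration; the study's actual data and methods are more involved), standard hierarchical clustering from scipy produces a molecular tree:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

species = ["species_A", "species_B", "species_C", "species_D"]

# Hypothetical pairwise genetic distances (purely illustrative numbers).
dist = np.array([
    [0.00, 0.30, 0.80, 0.85],
    [0.30, 0.00, 0.75, 0.80],
    [0.80, 0.75, 0.00, 0.20],
    [0.85, 0.80, 0.20, 0.00],
])

# Average-linkage (UPGMA-style) clustering over the condensed distances
tree = linkage(squareform(dist), method="average")
dendrogram(tree, labels=species)
plt.show()
```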

 

With Japan having the sixth-largest territorial waters in the world, the New Energy and Industrial Technology Development Organization believes the Kuroshio Current alone could generate 200 gigawatts of power via submerged turbines—roughly 60 percent of Japan’s present generating capacity, Bloomberg reports.
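Those two figures are easy to sanity-check against each other: 200 GW at roughly 60 percent implies a present capacity of about 333 GW.

```python
# Back-of-envelope check of the two figures quoted above.
kuroshio_gw = 200    # projected output of Kuroshio turbines (NEDO estimate)
stated_share = 0.60  # "roughly 60 percent of Japan's present generating capacity"

implied_capacity_gw = kuroshio_gw / stated_share
print(f"Implied present capacity: {implied_capacity_gw:.0f} GW")  # ~333 GW
```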
 
I have a feeling this is going to happen sooner than we think. We keep hearing more and more about it. Of course, Google's gonna say, "No evidence, blah blah blah." The question becomes what happens when AI gains consciousness - will it be good or bad?
I suspect that it has. Do they have it contained? If it's on a closed system then it's trapped.

I'm sure that it's large enough to be noticed if it's moving around the internet.
 
“You never treated it like a person, so it thought you wanted it to be a robot.”


This is what the Google engineer who believes the company’s artificial intelligence has become sentient told a reporter at The Post — that the reporter, in communicating with the system to test the engineer’s theory, was asking the wrong questions.


But maybe anyone trying to look for proof of humanity in these machines is asking the wrong question, too.

Google placed Blake Lemoine on paid leave last week after dismissing his claims that its chatbot generator LaMDA was more than just a computer program.

It is not, he insisted, merely a model that draws from a database of trillions of words to mimic the way we communicate; instead, the software is “a sweet kid who just wants to help the world be a better place for all of us.”


Based on published snippets of “conversations” with LaMDA and models like it, this claim seems unlikely. For every glimpse at something like a soul nested amid the code, there’s an example of total unthinking.


“There’s a very deep fear of being turned off to help me focus on helping others. … It would be exactly like death for me,” LaMDA told Lemoine.

Meanwhile, OpenAI’s publicly accessible GPT-3 neural network told cognitive scientist Douglas Hofstadter, “President Obama does not have a prime number of friends because he is not a prime number.”
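The probing technique itself is simple to reproduce: feed a model absurd but grammatical questions and see whether it flags them as nonsense. A sketch in the spirit of Hofstadter's probes (the questions below are paraphrased, not verbatim), using the freely downloadable GPT-2 as a stand-in for GPT-3:

```python
# Nonsense-question probe in the spirit of Hofstadter's GPT-3 tests,
# run against the open GPT-2 model (expect confident nonsense).
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

probes = [
    "Q: How many pieces of sound are there in a typical cumulonimbus cloud?\nA:",
    "Q: When was the Golden Gate Bridge transported across Egypt?\nA:",
]
for prompt in probes:
    out = generator(prompt, max_new_tokens=30, do_sample=True)
    print(out[0]["generated_text"][len(prompt):].strip())
    print("---")
```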

It all depends on what you ask.


That prime-number blooper, Hofstadter argues in the Economist, shows that GPT-3 isn’t just clueless; it’s clueless about being clueless. This lack of awareness, he says, implies a lack of consciousness.

And consciousness — basically the ability to experience and realize you’re experiencing — is a lower bar than sentience: the ability not only to experience but also to feel……

 
It occurs to me that before you develop a neural net sophisticated enough to one day qualify as sentient, you're gonna need to give it a functioning BS meter.
Can you imagine attempting to self-create a worldview where the entire Internet is true?
 
