WARNING! Artificial Intelligence: pros and cons

There is a Spanish saying: “Guerra avisada no mata soldado,” meaning a war you are warned about kills no soldiers.
In other words, a timely warning can prevent a disaster.

Here are a few of the things we should become aware of regarding AI, in case we ever encounter a similar situation.

‘I’ve got your daughter’: Mom warns of terrifying AI voice cloning scam that faked kidnapping

Artificial intelligence (AI) is a set of technologies that enable computers to perform a variety of advanced functions, including the ability to see, understand and translate spoken and written language, analyze data, make recommendations, and more.

AI is the backbone of innovation in modern computing, unlocking value for individuals and businesses. For example, optical character recognition (OCR) uses AI to extract text and data from images and documents, turning unstructured content into business-ready structured data and unlocking valuable insights.
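
To make the OCR example concrete, here is a minimal sketch (my own illustration, not from the quoted text) of extracting text from an image with the open-source Tesseract engine via Python; the packages used (pytesseract, Pillow) and the file name invoice.png are assumptions for the sake of the example.

```python
# Minimal OCR sketch: turn an image of a document into plain text.
# Assumes the Tesseract engine plus the pytesseract and Pillow packages are installed,
# and that "invoice.png" is a hypothetical scanned document on disk.
from PIL import Image
import pytesseract

image = Image.open("invoice.png")           # load the scanned page
text = pytesseract.image_to_string(image)   # run OCR to get the raw text
print(text)                                  # unstructured text, ready for downstream parsing
```

From there, the extracted text can be parsed into structured fields (dates, totals, names), which is the “business-ready structured data” the passage refers to.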

Artificial intelligence defined

Artificial intelligence is a field of science concerned with building computers and machines that can reason, learn, and act in such a way that would normally require human intelligence or that involves data whose scale exceeds what humans can analyze.

AI is a broad field that encompasses many different disciplines, including computer science, data analytics and statistics, hardware and software engineering, linguistics, neuroscience, and even philosophy and psychology.

On an operational level for business use, AI is a set of technologies that are based primarily on machine learning and deep learning, used for data analytics, predictions and forecasting, object categorization, natural language processing, recommendations, intelligent data retrieval, and more.

By Cade Metz

Cade Metz writes about artificial intelligence and other emerging technologies.

Published May 1, 2023; updated May 7, 2023

In late March, more than 1,000 technology leaders, researchers and other pundits working in and around artificial intelligence signed an open letter warning that A.I. technologies present “profound risks to society and humanity.”

The group, which included Elon Musk, Tesla’s chief executive and the owner of Twitter, urged A.I. labs to halt development of their most powerful systems for six months so that they could better understand the dangers behind the technology.

“Powerful A.I. systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.

The letter, which now has over 27,000 signatures, was brief. Its language was broad. And some of the names behind the letter seemed to have a conflicting relationship with A.I. Mr. Musk, for example, is building his own A.I. start-up, and he is one of the primary donors to the organization that wrote the letter.

By pinpointing patterns in the vast amounts of text they are trained on, L.L.M.s learn to generate text on their own, including blog posts, poems and computer programs. They can even carry on a conversation.
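
As a toy illustration of what “pinpointing patterns” means, here is a small sketch of my own (not from the article): it counts which word tends to follow which in a tiny corpus and then generates new text by sampling from those counts. Real L.L.M.s do something far more sophisticated with neural networks over vastly more data, but the idea of learning patterns from text and reusing them to generate text is the same.

```python
# Toy "language model": learn word-to-next-word patterns from a tiny corpus,
# then generate new text by sampling from those learned patterns.
import random
from collections import defaultdict, Counter

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count how often each word is followed by each other word (the "patterns").
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def generate(start="the", length=10):
    """Generate text by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length):
        counts = transitions.get(words[-1])
        if not counts:
            break
        nxt = random.choices(list(counts), weights=list(counts.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate())  # e.g. "the dog sat on the mat . the cat chased"
```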

This technology can help computer programmers, writers and other workers generate ideas and do things more quickly. But Dr. Bengio and other experts also warned that L.L.M.s can learn unwanted and unexpected behaviors.

These systems can generate untruthful, biased, and otherwise toxic information.
Systems like GPT-4 get facts wrong and make up information, a phenomenon called “hallucination.”
Companies are working on these problems. But experts like Dr. Bengio worry that as researchers make these systems more powerful, they will introduce new risks.

The letter was written by a group from the Future of Life Institute, an organization dedicated to exploring existential risks to humanity. They warn that because A.I. systems often learn unexpected behavior from the vast amounts of data they analyze, they could pose serious, unexpected problems.

They worry that as companies plug L.L.M.s into other internet services, these systems could gain unanticipated powers because they could write their own computer code. They say developers will create new risks if they allow powerful A.I. systems to run their own code.

In Puerto Rico, my island, the news has been warning people about an AI scam: someone calls you, asks a few questions, and copies your voice to try to use it to purchase items and do many other things.

New AI-related scams clone voice of a loved one. Here’s what to look out for

Rockford Register Star

May 12, 2023

You get a call saying a relative was in a car crash and is being held in jail unless you spend hundreds of dollars to get them out. Well, that relative is likely OK and it’s likely you’re being scammed.

It’s part of a new scam that the Federal Trade Commission said uses AI, or artificial intelligence, to scam people.

Police in Beloit, Wisconsin, recently said a resident was the victim of a scam that likely used AI.

Here’s what you need to know:

  • What does the scam look like?

You’ll receive a call that will sound like a friend or family member who will say they’ve been kidnapped or were in a car crash and need money as soon as possible to be released.

  • Why does it sound like I know the person when they call?

With the new AI technology, all scammers need is a video or audio clip of someone. Then, using the new technology, they’re able to clone the voice so when they call you, it sounds just like a loved one.

  • How can I tell if the phone call is real?

The FTC recommends contacting the person who supposedly called you to verify their story. If the phone call came from a jail, it’s recommended that you call the jail directly. If you can’t get hold of the person, try reaching out to another family member first.

  • How else can I tell the call may be fake?

  • Incoming calls come from an outside area code, sometimes from Puerto Rico with area codes (787), (939) and (856).
  • Calls do not come from the alleged kidnapped victim’s phone.
  • Callers go to great lengths to keep you on the phone.
  • Callers prevent you from calling or locating the “kidnapped” victim.

  • How will the scammers ask for payment?

They’ll ask for payments that are hard for you to get back, such as wiring money or gift cards.

1 Like

This is why there should be a thorough talk with loved ones about what they’ll actually say when they’re distressed.

The scam is an extension of previous ones. Perhaps the general population is unaware that their loved ones’ voices can be mimicked by A.I.

1 Like

likedeadclowns
Perhaps the general population is unaware that their loved ones’ voices can be mimicked by A.I.

I never knew about this AI voice phishing until last week, when I was watching the news from my island (PR) and couldn’t believe that this is actually causing terror there. I guess even my family in PR weren’t aware of it, because they never mentioned anything to me either; that is, until I alerted them to be extremely careful, since I have so many young nieces and nephews, and they love to hang out a lot.

THANK YOU SO MUCH for sharing the link about the phone scam in prison! I wasn’t aware of that either. Thankfully, I don’t have relatives in prison, but it’s good to know prisoners are doing it too.

1 Like

There was an “epidemic” of similar things, but not using AI. They did it to my elderly mother twice: “Your daughter has been in a car accident. She’s fine, but she has killed a child. There is someone who is willing to take the blame for her and go to prison instead of her, but it will need money.” And there was a woman’s voice almost screaming, “Mom, mom, do something!” Of course, a high-pitched woman’s voice could be just about anyone, so my mother wasn’t sure whether it was me or not. And being hard of hearing didn’t help. She started panicking, saying there’s no money in the bank, already starting to think about how she could sell the house, etc. As luck would have it, my children were there, and they told her to call me at the office with the other phone (the mobile). They made sure I was fine at work, hung up on these people, and everything was okay.
The next time she was prepared and knew better than to believe it, but she told me that her heart was still pounding: “What if this time it’s true? Not very likely, but what if?…” However, she called me immediately, so…
The police told me that there were 2 or 3 different groups of people doing this, and that they had received 50 such complaints in the last month alone, usually targeting the elderly.
I mean, some old people could die of a heart attack with things like these!
It seems that AI-generated voices are the next step of that same thing, to make it seem even more realistic. We should warn all the people we know.

4 Likes

Have you heard about the film industry wanting to make a cloned AI of each actor/actress? The actors would appear in movies/dramas/shows ONLY once, and the AI would film the rest of the scenes/episodes, so the actors would get paid for ONE DAY’s work only. They have million-dollar mansions that need maintenance, plus water and gas bills to pay, not counting the maids, chauffeur, etc… HOW will they survive on one day’s salary?

@entwyfhasbeenfound Haven’t heard from you in a while. Hope all is well, and you can give your opinion on this AI situation. I always love all the points you bring out for us to read.

Are we facing discriminatory AI? An interesting and somewhat scary article.

Excerpt: Yahoo News

What’s happening

The rise of artificial intelligence has led to the creation of generative AI tools, like ChatGPT, that provide automated predictions based on large amounts of data. But as the use of AI reaches new heights, experts say the booming technology increases racial bias and discriminatory practices.

“Those tools are trained to make predictions based on historical data of what’s happening or has happened before. So automated predictions will mirror and amplify the existing discrimination in the context in which it’s used,” Olga Akselrod, senior staff attorney in the Racial Justice Program at the American Civil Liberties Union, told Yahoo News.

In a 2020 study by Cambridge University, researchers found that AI causes unequal opportunities for marginalized groups. But AI continues to grow despite the inequities. Currently, 35% of companies are using AI, and 42% are exploring future adoption of the technology, according to Tech Jury.

“Predictive technologies, such as artificial intelligence, have been implemented in virtually every facet of our day, by both government and private entities, and impact truly critical decisions, such as who gets a job, who gets a loan, who goes to jail, and a host of other decisions,” Akselrod said.
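
To see the mechanism Akselrod describes in miniature, here is a hypothetical sketch of my own (not from the article): a model is trained on synthetic “historical” approval decisions that were skewed against one group, and it then scores equally qualified applicants differently depending only on group membership.

```python
# Hypothetical illustration of bias amplification: a model trained on skewed
# historical decisions reproduces the skew for identically qualified people.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)          # qualification, identically distributed in both groups
# Synthetic history: at the same skill level, group B was approved less often.
approved = (skill + rng.normal(0.0, 0.5, n) - 0.8 * group > 0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, group]), approved)

# Two applicants with identical qualifications, differing only in group:
test = np.array([[0.5, 0.0], [0.5, 1.0]])
print(model.predict_proba(test)[:, 1])   # group B receives a visibly lower approval score
```

Nothing in the code “intends” to discriminate; the disparity comes entirely from the historical data the model was fit on, which is exactly the point the article is making.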

Why there’s debate

According to experts, AI tools lack transparency and could cause disparities at an unprecedented level.

“Predictive tools pose a great threat to civil rights protections [because] they are used at an incredible scale that is sort of unmatched by individual decisions of the past, or even systemic decisions that weren’t made with the kind of speed and frequency that decisions are made today, using predictive tools,” Akselrod said.

Since racial and economic inequities already exist in society, Akselrod says AI tools will only add to that burden.

“Depending on how AI systems are set up, they can facilitate the redlining of mortgage applications, help people discriminate against individuals they don’t like or help screen or build rosters of individuals based on unfair criteria,” Darrell West and John Allen wrote in a report published by the Brookings Institution.

But according to a recent Pew Research poll, more than 50% of Americans think racial bias in workplaces will decline if employers use AI during the hiring process, and that AI will ultimately help fight against discrimination.

MY personal opinion, WHICH HAS TO BE PROVEN WITH MORE RESEARCH rather than just throwing this out there and saying, OOPS, the AI made a boo-boo and we will work on correcting that (centuries later, of course…).
I REST MY CASE. LOOK:
Broderick Turner, director of the technology, race and prejudice lab at Harvard Business School, says AI is not racist, because it is only a tool. “However, depending on the data and rules it is trained on — both created by humans — it can be used in a racist manner,” Turner said at an Assembly Talk at Harvard. (Who doesn’t know that? I do!)

What’s next

In July, President Biden announced plans to work alongside seven AI development companies to set guidelines that would create a safe and trustworthy AI system.

“Realizing the promise of AI by managing the risks is going to require new laws, regulations and oversight,” Biden said on July 21. “In the weeks ahead, I’m going to continue to take executive action and help America lead the way to responsible innovation.” He also called on Congress to pass AI legislation.

But Akselrod says the government is playing catch-up. “These more modern tools of discrimination have not yet been met with the regulation, legislation, and government enforcement that’s needed to protect civil rights and civil liberties,” she said.

Perspectives

AI ‘learns by example’

“AI is just software that learns by example. So, if you give it examples that contain or reflect certain kinds of biases or discriminatory attitudes … you’re going to get outputs that resemble that.” — Reid Blackman, author of “Ethical Machines,” on CNN

‘AI has a race problem’

“AI has a race problem. What it tells us is that AI research, development and production is really driven by people that are blind to the impact that race and racism has on shaping not just technological processes, but our lives in general.” — Mutale Nkonde, former journalist and technology policy expert who runs the nonprofit AI for the People, to CBC News

AI creates new roadblocks for marginalized groups

“Artificial intelligence, invisible but pervasive, affects vast swaths of American society and will affect many more. Biden must ensure that racial equity is prioritized in AI development.” — ReNika Moore, director of the Racial Justice Program, ACLU

Solutions are top of mind for experts

“The solution isn’t just to make tech more inclusive, but to root out the algorithms that inherently classify certain demographics as ‘other.’ There is a need for accountability and transparency in AI, as well as diversity in the development of AI systems. Regulatory oversight is also a critical part of this solution. Without these changes, the future could see current racial inequities become increasingly entrenched in our digital infrastructure.” — Meredith Broussard, data journalism professor at New York University, to Yahoo News

AI systems can have gender and racial bias

“We often assume machines are neutral, but they aren’t. My research uncovered large gender and racial bias in AI systems sold by tech giants like IBM, Microsoft, and ■■■■■■. Given the task of guessing the gender of a face, all companies performed substantially better on male faces than female faces. The companies I evaluated had error rates of no more than 1% for lighter-skinned men. For darker-skinned women, the errors soared to 35%.” — Joy Buolamwini, founder of the Algorithmic Justice League, Time

The dangers of AI should be prevented before it’s released to the public

“Fundamentally, we have to have a robust human and civil rights framework for evaluating these technologies. And I think, you know, they shouldn’t be allowed into the marketplace to propagate harm, and then we find out after the fact that they are dangerous, and we have to do the work of trying to recall them.” — Safiya Noble, professor of gender studies and African American studies at UCLA, to NPR

IN MY OPINION…
This is my OWN two cents on this: I already mentioned how, when they use AI to make subtitles, the AI only acknowledges HE/he, and a SHE (a female character) is always a HE throughout the subtitles! We all know where that’s coming from, right?

I have complained so many times, and so far I haven’t seen any changes (in the dramas/movies I’ve watched so far). IF the AI can’t tell a female character from a male character, what makes us think it won’t have racist issues too?

I saw a YouTube video of an interview with a female AI, where the AI stated that, with time, AI will try to take over and destroy humanity. I don’t know if this was “fake” or real, but it’s the second video with this statement, and it scares the hell out of me, because this is not the first time: I also saw an interview with a MALE AI version in one of these videos, and it gives me such a dreadful and uneasy feeling. In NY, they are now using AI to monitor people who evade the train fare, and offenders will get an automatic ticket. Cops used to do that too, and now we have AI doing their work as well. Are cops going to become unneeded too, like they want to do with the actors?

AI has no emotions, so we can’t even hope it can learn to love us humans, and after seeing MEGAN and another AI horror movie, I am so terrified of what the future holds for us… :frowning:

4 Likes

From where did they get the voice examples to imitate the voice?

3 Likes

@sonmachinima

The actors/actresses have to provide the audio/voice, which will give them measly pay too.

This situation makes no sense to me at all.

2 Likes

And y’all know someone has to program them, right? So the AI is using someone’s ideas, thoughts, voices, and whatever else.
“Are You Human?” was the main one for me, but there are many more out there that we innocently watch; even the USA has a bunch. Yes, AIs ruling the world? We do think it’s farfetched, but is it really? If we aren’t careful… note the article above. Note that we’ve got the voice-activated thing, the fingerprint, and so much more. We all need to wake up before it does get too late. Oh yeah, Blade Runner and Soylent Green, just to name a couple more.

2 Likes

They wouldn’t “rule” in the sense that they’d have their own will or something, but the fact that they are being used more and more will of course have its effect. They are already being used to make decisions about people’s lives and in war. Future cars will, in critical situations, have to “decide” whether to save the person(s) inside the car or the one crossing the street. And all their decisions and conclusions are based on whatever information they were fed by humans. They might take over the biases of those humans, while they lack empathy on top of that.

3 Likes

mirjam_465
They wouldn’t “rule” in the sense that they’d have their own will or something, but the fact that they are being used more and more of course. Future cars will in critical situations have to “decide” whether to save the person(s) inside the car or the one crossing the street. while they lack empathy on top of that.

The “humans” feeding the information will be the rulers of the world, which means their critical decision on who lives and who dies in a car-accident situation will most likely be based on which one has more money or will be more advantageous to keep alive because it will bring more benefits to the economy. The fact that they can’t be “fed” feelings of love or empathy towards the human race is the scariest situation we will all have to face one day. If anyone wants to see firsthand the consequences of AI taking over, watch MEGAN. They did their research with that movie, and you can see the message “loud and clear.” The movie is so scary I actually had to pause it and stop watching until my mind was ready to take the next scene.

1 Like

IMHO, AI is . . . Google on steroids. A search tool, and since searching includes writing, also a writing tool. And a promotional gimmick.

Microsoft has “smart” Bing. Opera has Aria, and Google has Bard. Pick me, pick us, you can’t live without the brand of AI we offer; you know you want to do more and think less and buy our products with beezillions in cryptocurrency.

When I was part of the Viki Community on ■■■■■■■, I mean D-i-s-c-o-r-d, I got completely distracted by BlueWillow AI. At the same time, I played around with ChatGPT (YouTube foodies cooked ChatGPT recipes, and I did, too, with mediocre results).

We’ve all had moments of skepticism and hilarity as a result of “Discobot,” the Viki algorithm that routinely censors harmless words and phrases.

As I see it, the reality is that, whatever flavor of AI is under consideration, each one is just a computer program that cannot exceed the intelligence of its programmers.

In that sense, AI will never suddenly “awake” and BWAHAHA!!! rule the world and reduce humanity to an enslaved species. But, as @mirjam_465 points out, it will enable power-hungry, ego-driven people to create a world in which everything is very easy for them and very difficult for “the little guy.”

Haters be hating, dictators be dictating
AI be shutting down freethinking and debating



5 Likes

@entwyfhasbeenfound

Glad to “hear” from you. Yes, an enslaved species we might become; much, much worse things await us, and you could say it’s just around the corner. Humanity has always lived in denial until the s**t hits them in the face. Let’s just hope we have a savior who will rescue us from the nightmare to come… :cold_face:

A MUST SEE > OPEN YOUR EYES BEFORE IT’S TOO LATE!!!

READ the intro at the beginning, and then jump to THE SERMON at minute 22:06 so you can watch what’s so important about this video. AMAZING…

He’s just being told what he asked for. It would just as readily give an answer to “Write Revelation from n to n in the style of William Shakespeare,” and it will do so, since it will most likely have the entire Bible in its dataset, as well as the entire works of William Shakespeare, Mark Twain, J.K. Rowling, Isaac Asimov, etc.

:roll_eyes::roll_eyes::roll_eyes::roll_eyes:
Before it’s too late of what?
I see these arguments about AI all the time: “If you think this is amazeballs, just think what it’ll be like in 5-10 years’ time!” or “AI will supersede human intelligence; it will be the end of humanity,” as if there’s room for Moore’s-law-style doubling all the way to a brave new/dystopian world. People let their imagination run rampant without realising that the shiny new AI is probably already at 80% of its leap in advancement, and the rest is only going to be small incremental increases.
An analogy would be the first iPhone compared with today’s: yeah, it’s faster, has better cameras and more storage (and is eye-wateringly more expensive!), but it’s fundamentally the same, with incremental increases.

If he had asked what the mark of the beast is, AI wouldn’t be able to tell him any more than every opinionated argument raging on the internet today can.

If you ask it whether there will be a pre-trib rapture and it replies “No,” but you don’t like that answer and tell ChatGPT it’s wrong, it will apologise for getting it wrong and will happily say “Yes, there will be one” with equal certainty, because there are just as many factions of Judaism/Christianity that claim there will be a pre-trib rapture as claim there won’t. As long as the people at OpenAI didn’t exclude either bit of data, it will answer without even knowing what the hell a pre-trib rapture is.

All that dystopian FUD about what AI would do about overpopulation is nothing more than a distillation of China’s one-child policy, which was tried in real life and is therefore a possible answer. I’d be willing to bet that if you pushed it hard enough, it would eventually come up with gas chambers, because… you know.
Ask AI what it thinks the ideal population is and it’ll probably say 500 million (Ehrlich’s hypothesis) because it doesn’t know better, or it could just as easily claim the current 7 billion, because that’s what it is today and we’re all still here, so it’s as much of an answer as any.

ninjas_with_onions
Before it’s too late of what?

Not of what… lol. Too late to stop the disaster that awaits us all, and I can say with conviction that it is practically around the corner.

It’s already late anyway. The people are ignoring the signs of the terrors to come. I feel so bad for ALL the children; all I do is cry. I hope I’m wrong… :sob: :sob: :sob: :sob:

What terror? You’re making it out like it’s some kind of self-thinking AI overlord that will erase mankind like in the Terminator movies.

It will definitely be a disruptor, but it’s no different from, say, how smartphones were a disruptor to the digital camera industry, which was itself a disruptor to the photographic film industry.
Or the internet, which altered the way we consume media and the way we shop, and how we relate to one another on social media, which was supposed to connect us but turns out to further isolate us, bringing out the worse side of people as it enabled uninhibited behavior.

Sure, AI will be abused as a tool, but that’s nothing we as a species don’t already do to each other.

Like I said before, people are only seeing what’s in front of them, but they are totally oblivious to what’s going on all over the world that baffles the mind. When it happens, and I mean what is definitely coming, you’ll know what I was talking/writing about in here. By then, I don’t know if I’ll be around here in Discussions.