Your views on the influx of AI like ChatGPT?

Vyom (OP)
That's beyond the capability of a single bug or even a few bugs. Weapons of mass destruction are protected by multi-party key protocols to avoid accidental launch.

Of that I am not sure. People trade security for convenience all the time. Maybe some "brilliant" guy would trust AI enough to remove layers of security just so that it's easier for them to launch nukes by voice command or something.

It's a part of the evolutionary journey. Current hardware doesn't look capable of harboring self-conscious AI, but that may change if quantum computation is developed.

Quantum computation only makes computation faster; it doesn't mean AI can become self-conscious. But AI doesn't "need" to be self-conscious. It just needs to be more efficient than humans. There's a good video about it if you would like to watch: *www.youtube.com/watch?v=Rog9oHtVmjM

Humans may just evolve into a centralized/distributed single consciousness in the far future, with new developments and advancements in networking & connectivity.

I kind of don't think that can happen. In fact, we humans are slowly becoming more distributed and siloed as time progresses. A simple example: previously there was no cable TV, only Doordarshan. Most of us watched TV together and watched more or less the same content; that's the reason Shaktimaan became so popular. Now that streaming media is popular, there are shows most people aren't even aware of. There's lots of content, lots of views, lots of "clubs" to join. Thinking is just becoming more fragmented over time. While there is no dearth of information out there on the internet, the majority don't seem to care and instead love to believe whatever WhatsApp forward they come across. With the advancement of AI or quantum computing, all these issues are just going to amplify. But that's just my theory.
 
Vyom (OP)
I have several interesting project ideas which I would like to implement one day. The future isn't bleak, gentlemen; it's as bright as it can be.
While I am optimistic too, I do think we will face many issues, and a lot of people are going to be left behind as AI creates a wider divide in skills and knowledge.
But I like your pure optimism.
 

aaruni
A little late to the party here, so forgive my lengthy post as I try to respond to everything thus far:

Does it scare you? Did you see it as an eventuality? How's your life shaping up before and after this AI renaissance?

It scares me to no end. It's a constantly evolving black box, which "grows" and "responds" in inexplicable ways. One never knows how much to trust it, and an argument can be made that if you can never know when to trust it, you can never trust it, and therefore it's always useless. Unfortunately, it is an eventuality, like every other terrible decision our collective stupidity and corporate overlords thrust upon us (always-online services, microtransactions, advertisements, cryptocurrency, the list goes on). So far, my personal life is more or less unchanged by this phenomenon as I actively avoid using anything which even smells of this.

AI is a tool like any other

I disagree. We have "normal" tools like hammers and screwdrivers, which are simple mechanical tools and innately benign. We use chemical tools like fire, which is "alive" and "reacts", but in simple, predictable ways (and even then fire safety is a huge problem). We have "dumb" electrical tools like calculators and conventional computer programs: these accelerate simple but time-consuming calculations (every high school student can multiply matrices, but it takes a computer to do it quickly). But for the first time, we now have programs that can fake the impression of "thinking". And while it may not be clever or sentient, it does "think" according to some definition of thinking. And like any unknown thinking thing, it's not to be trusted.

If AI is developed, the human legacy of our time will be carried forward by machines

To what end? Do we really wish to leave a permanent mark on the environment around us, which we have already battered and burned and scarred into near hostility?

What if AI is the reason for biological life going extinct?
How do you think that would happen?

It will probably not be a Skynet scenario where the AI overlords decide humans are to be eliminated. But consider how much humans are already affected by propaganda and misinformation. Now multiply that ill effect manyfold, as AI produces the same terrible effects, but faster and more pronounced.

A Star Wars fan will understand: AI and humans can coexist in perfect harmony.

I don't know if you mean this sarcastically, but I assume not.

In which case, even multiple groups of humans cannot coexist in harmony. All AI does is amplify human tendencies. Since some humans want other humans to die, it's not so hard to imagine an unhinged AI also wanting humans to die.

And even with self-aware AIs, I doubt any sane person would put a country's nukes under the control of an AI. How would you even justify doing that?

How did we justify planting a microphone in every house? Or a camera on every door? Or connecting every car to the internet?

More importantly, the first steps towards this have already begun. A firm in China has already appointed an AI as its CEO ( *www.independent.co.uk/tech/ai-ceo-artificial-intelligence-b2302091.html )

Weapons of mass destruction are protected by multi-party key protocols to avoid accidental launch.

Protocols change, sometimes for the worse. Who could've thought it would be a sensible idea to take out the mechanical brakes on a massive metal box moving at high speed, and replace them with a button which gently asks the wheels to spin a little slower?

As a closing piece to this message, here are some real examples of real damage AI has already caused:
- Man ends his own life after an AI chatbot encourages him to do so ( *www.msn.com/en-us/news/news/article/ar-AA19jXdP )
- How Wrongful Arrests Based on AI Derailed 3 Men's Lives ( *www.wired.com/story/wrongful-arrests-ai-derailed-3-mens-lives/ )
- ChatGPT falsely told voters their mayor was jailed for bribery. He may sue. ( *www.washingtonpost.com/technology/2023/04/06/chatgpt-australia-mayor-lawsuit-lies/ )
- ChatGPT falsely accused me of sexually harassing my students. Can we really trust AI? ( *eu.usatoday.com/story/opinion/colu...nformation-bias-flaws-ai-chatbot/11571830002/ )
- Bing Chat Behaving Badly - Computerphile ( *www.youtube.com/watch?v=jHwHPyWkShk ) (Bing chat threatens user)
- ChatGPT with Rob Miles - Computerphile ( *www.youtube.com/watch?v=viJt_DXTfwA ) (Some explanation of how chatGPT is incentivized to lie to humans)
 
Vyom (OP)
That's a good, albeit very bleak, look at the future of AI. But it's hard to argue with that.
But I would like to discuss the following:

So far, my personal life is more or less unchanged by this phenomenon as I actively avoid using anything which even smells of this.

Consider the example of when the Internet was born. People who adopted internet tools became more efficient. Isn't this the case with AI "tools" too? The ones who use chat bots can write code faster (I am assuming these bots will become better at writing correct code). The ones who can use Stable Diffusion will make images and illustrations faster. Ultimately it will be a race to become more efficient, faster, quicker. The ones who won't use these tools will eventually become so inefficient that it will start affecting their careers, and thereby their personal lives. Where do we draw the line? How can we avoid using it?

My colleagues (at least my manager) are already paid customers of the premium version of ChatGPT. They have already started adopting AI tools. Personally, I am also trying to incorporate these tools into my life. I am afraid of AI, but I am more afraid of people using AI to go miles ahead of me, or at the very least making me outdated.
 

aaruni
The ones who use chat bots can write code faster

It's already quite hard debugging your own code, and near impossible debugging another reasonable human's code. All the best debugging an AI's code, where you don't even know what parts you can trust ( *preview.redd.it/how-openai-chatgpt...bp&s=0e59c06a19fea3b0e3dac6f8c837cd42ad9940dd )

The ones who can use Stable Diffusion will make images and illustrations faster

And all of those look like someone having a really bad trip, and they still can't do fingers.

Ultimately it will be a race to become more efficient, faster, quicker.

Most current corporate scenarios are races to the bottom, with no concern for the greater good, and in many cases, with no concern for the end user. You can choose to take part in it, or not. Notice how you used "faster" and "quicker", but forgot "better".
 

aaruni
Consider the example of when the Internet was born.

The Internet was a different class of tool. It was still "dumb". It wouldn't decide, based on unknown, undisclosed training weights, that your social credit isn't enough to access the internet today because it thought it detected an intent to write something politically inflammatory.
 
Vyom (OP)
Most current corporate scenarios are races to the bottom, with no concern for the greater good, and in many cases, with no concern for the end user. You can choose to take part in it, or not. Notice how you used "faster" and "quicker", but forgot "better".
Well, I didn't forget. I do think it may not be better, but it would be a race regardless.

And all of those look like someone having a really bad trip, and they still can't do fingers.

They can actually. *www.washingtonpost.com/technology/2023/03/26/ai-generated-hands-midjourney/
But I can't blame you. AI is becoming "better" (which may not be positive for humans) by the day, and it's hard to keep track of the progress because of our inability to perceive exponential growth.

It's already quite hard debugging your own code, and near impossible debugging another reasonable human's code. All the best debugging an AI's code, where you don't even know what parts you can trust ( *preview.redd.it/how-openai-chatgpt...bp&s=0e59c06a19fea3b0e3dac6f8c837cd42ad9940dd )

The link is not working. Anyway, let's take another example:

Say I have a solution, but in C. I need the solution in Python. What will you do? Write the Python equivalent of the code yourself? A person using GPT will ask the AI to do the conversion for them. Now that person is way ahead of you by using the AI tools. So how can we justify not using AI, even if using it means contributing to the problem?
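For illustration, here's roughly what that workflow looks like. This is just a minimal sketch assuming the openai Python package (v1.x) and an API key; the model name, the C snippet, and the prompt are placeholders I made up:

# Minimal sketch: asking a chat model to convert a C function to Python.
# Assumes the `openai` package (v1.x) and OPENAI_API_KEY set in the environment.
# The model name and the C snippet below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

c_code = """
int sum_upto(int n) {
    int total = 0;
    for (int i = 1; i <= n; i++) {
        total += i;
    }
    return total;
}
"""

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {"role": "user",
         "content": "Convert this C function to idiomatic Python:\n" + c_code},
    ],
)

print(response.choices[0].message.content)  # the suggested translation, still needs human review

The person who does this gets an answer in seconds. Whether the answer can be trusted is another matter, but the time saved is real.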


Another, non-technical example is air conditioners.
ACs contribute to the rise in global temperature, partly due to refrigerant compounds known as HFCs (hydrofluorocarbons). To combat the rising temperature, you now need an AC in your house/office. But using more ACs causes the temperature to rise further. So now what would you do? Stop using ACs, even if that means discomfort for you and your family, for the "noble cause" of not contributing to the problem?
 

aaruni
The link is not working.

Yeah, idk why the digit forum now collapses URLs with ...

A person using GPT will ask the AI to do the conversion for them

And the AI might give you something which works, or something which straight up refuses to execute, or something which runs but gives you wrong results and you don't know why. Then you'd call up a Python expert to look at it, and he'd have to figure out what the AI is doing wrong and then fix that. Or you could just do it in Python and not need to waste an expert's time.
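To make that concrete, here's a toy example of my own (not actual AI output): a line-by-line translation of a tiny C function that runs fine in Python but quietly gives different answers, because C and Python disagree about integer division.

# Toy illustration: a "faithful looking" translation that runs but is subtly wrong.
# Original C (shown as a comment):
#   int average(int a, int b) { return (a + b) / 2; }   /* integer division, truncates toward zero */

def average_naive(a, b):
    return (a + b) / 2    # Python's / is float division: average_naive(3, 4) == 3.5, while the C version returns 3

def average_floor(a, b):
    return (a + b) // 2   # // floors toward negative infinity: average_floor(-3, -4) == -4, while the C version returns -3

print(average_naive(3, 4))    # 3.5
print(average_floor(-3, -4))  # -4

Neither version crashes; you only notice when the numbers stop matching, and then you're back to needing someone who actually understands both languages.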

Another, non-technical example is air conditioners.

The AC example is bad for your argument, and perfect for mine. Historically, ACs were not required anywhere except big commercial complexes. Their spread to most American homes came from great marketing of the AC as a status symbol, and then to many Indian homes because we love to ape the West. ACs make things much hotter for everyone not using ACs, and now everyone must race to the bottom: a burning, barren earth where everyone and everything needs climate control.

Moreover, using ACs also subjects humans to "thermal monotony", which prevents them from acclimatizing to more extreme temperatures.

It's a prevalent technology which has made, and continues to make, life worse for everyone.

( *time.com/6077220/air-conditioning-bad-for-planet-how-to-fix/ )

So now what would you do?

We avoid using one for two reasons: it's expensive, and so far, it's rarely required. You can get quite far by using old-school "desert coolers" and modulating when you let the heat in (windows, curtains, etc.).
 
Vyom (OP)
Oh dear. You turned my AC example in favor of your argument. But my point still stands.
We couldn't stop the AC from taking over the world, and neither can we stop AI now that Pandora's box is open. So you think we should just boycott it and not take advantage of it?

Also, desert coolers are a better alternative to ACs, BUT what about humidity? So they're not perfect. Maybe you are willing to make sacrifices for the greater good. Not everybody will.
 

Desmond
Just had an interesting shower thought about Copilot-like AI code generators:

If AI code generators generated machine code directly instead of high-level code, they could make all programming languages obsolete.
 

Anorion
I doubt self-aware AI is even possible anytime soon. At least not for another 100 or so years. Even then, there would have to be some project specifically aimed at creating self-aware AIs.

And even with self-aware AIs, I doubt any sane person would put a country's nukes under the control of an AI. How would you even justify doing that?
OpenAI has set out specifically to do this!

Not everyone can tell when ChatGPT hallucinates; it can throw out garbage very confidently.

About the jobs thing, I'm not worried: either we will get new jobs, or the economic system will evolve to make jobs unnecessary. In any case, the old paradigm of learning for 20 years and then working for 40 has already changed. Tech is transforming jobs so rapidly that entirely new professions appear in the time it takes someone to go through college. You have to keep learning while you are working.

In all of this, we are ignoring advancements in biotechnology, and how AI can be integrated into biotech... which is the other half of the technology revolution that is transforming human society, apart from just silicon-based tech.
 

Desmond
I don't get what the point of self-aware AIs is. It's not like current-gen AIs are handicapped without self-awareness.
 

Anorion
It does not need to be self-aware per se; it just needs to be continuously trained, with a single AI that can perform all the expected tasks, such as image identification, natural language processing, and content generation...

We are gonna get publicly available AI that can generate videos soon I guess
 

Desmond
A single AI that does everything is a huge ask. Image generation is a vastly different domain from text generation, and so on. The only practical way to achieve this, I think, is to have multiple different AIs and a central NLP system that interprets what a user wants and delegates the actual work to the corresponding AI.
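Something like this, in very broad strokes. A toy sketch only: the intent detection here is just keyword matching and the handlers are stubs I made up, whereas a real router would itself be an NLP model sitting in front of real models:

# Toy sketch of the "central interpreter + specialised AIs" idea.
# The keyword routing and the handler names are made up for illustration;
# a real router would be a model, not a keyword match.

def generate_image(prompt: str) -> str:
    return f"[image generated for: {prompt}]"       # stand-in for an image model

def generate_text(prompt: str) -> str:
    return f"[text generated for: {prompt}]"        # stand-in for a language model

def transcribe_audio(prompt: str) -> str:
    return f"[transcription requested: {prompt}]"   # stand-in for a speech model

HANDLERS = {
    "image": generate_image,
    "text": generate_text,
    "audio": transcribe_audio,
}

def route(request: str) -> str:
    """Crude central 'interpreter': guess the intent, delegate to the matching AI."""
    lowered = request.lower()
    if "draw" in lowered or "picture" in lowered:
        intent = "image"
    elif "transcribe" in lowered or "audio" in lowered:
        intent = "audio"
    else:
        intent = "text"
    return HANDLERS[intent](request)

print(route("Draw me a picture of a red bicycle"))
print(route("Summarise this article about air conditioners"))

The hard part, of course, is the "interprets what the user wants" step; everything after that is plumbing.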
 

Anorion
GPT (the Generative Pre-trained Transformer) is approaching that; it is the base tech for generative text (ChatGPT) as well as image generation (DALL-E).

OpenAI is building artificial general intelligence, or helping others realise it.
IDK if that necessarily means the AGI has to be sentient or self-aware.
 

Anorion
It's too complex to explain why ACs are evil, and people think you are strange for trying. I keep trying anyway.
 

Anorion
What I find hilarious is the stuff ChatGPT is hardwired not to talk about: violence, piracy, political factions, disinformation... If this is what we think is dangerous about AI, then AI has not become too powerful; humanity has become too dumb.
 

Æsoteric Positron
What I find hilarious is the stuff ChatGPT is hardwired not to talk about: violence, piracy, political factions, disinformation... If this is what we think is dangerous about AI, then AI has not become too powerful; humanity has become too dumb.
It's focusing on being advertiser-friendly atm.
 