> It sounds kind of right. When I've been doing web development lately, I use ChatGPT to help set up the framework and then I fill in the details in code.

You have any links to prove that? Doesn't sound right.
> One of the scary aspects: its rate of development isn't like other technologies. We could easily see it get 2x better in a month, or 10x smarter with a new major release. There is no known limitation to how smart AI can get. If unchecked and allowed to self-iterate, experts in the field speak of an intelligence explosion.

It's not at replacement level yet and probably won't be for several years.
> I've been chatting on and off with ChatGPT for a couple weeks now. Just random conversation. During one convo, ChatGPT actually provided me with incorrect information. I corrected it, and it came back and actually apologized.

It sounds kind of right. When I've been doing web development lately, I use ChatGPT to help set up the framework and then I fill in the details in code.
It's very useful. It can debug also.
One thing, though: it is very fallible, and you can code yourself unsuccessfully in circles trying to use it as an end-all be-all.
It's not at replacement level yet and probably won't be for several years.
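If anyone wants to script that scaffold-then-fill workflow instead of using the chat window, here is a rough sketch. It assumes the openai Python package (v1 SDK) with an OPENAI_API_KEY set; the model name and prompts are just placeholders, and you still fill in the real details by hand afterwards.

```python
# Rough sketch of the "scaffold with ChatGPT, fill in details yourself" workflow.
# Assumes: `pip install openai`, OPENAI_API_KEY in the environment, and a model you have access to.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def scaffold(description: str) -> str:
    """Ask the model for a project skeleton; the details still get written by hand."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You generate minimal web project scaffolds."},
            {"role": "user", "content": f"Set up a skeleton for: {description}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(scaffold("a small Flask site with a contact form"))
```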
https://alphafold.ebi.ac.uk/

"Advanced AI could represent a profound change in the history of life on Earth"
> I'm actually in the process of learning the coding of AI and its progression.

One of the scary aspects: its rate of development isn't like other technologies. We could easily see it get 2x better in a month, or 10x smarter with a new major release. There is no known limitation to how smart AI can get. If unchecked and allowed to self-iterate, experts in the field speak of an intelligence explosion.
"An intelligence explosion is theoretical scenario in which an intelligent agent analyzes the processes that produce its intelligence, improves upon them, and creates a successor which does the same. This process repeats in a positive feedback loop– each successive agent more intelligent than the last and thus more able to increase the intelligence of its successor – until some limit is reached. This limit is conjectured to be much, much higher than human intelligence."
https://www.lesswrong.com/tag/intelligence-explosion
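To make that feedback loop concrete, here is a toy sketch in Python. The numbers are invented; it only illustrates the loop the quote describes (each agent improves its successor until some limit is reached), not any real measure of intelligence.

```python
# Toy illustration of the "intelligence explosion" feedback loop described above.
# All numbers are made up; this is not a real model of AI capability.

def run_explosion(start: float = 1.0, ceiling: float = 1000.0, generations: int = 20):
    intelligence = start
    for gen in range(generations):
        # Each agent designs a successor; smarter agents make bigger improvements.
        improvement = 1.0 + 0.1 * intelligence
        intelligence = min(intelligence * improvement, ceiling)
        print(f"generation {gen + 1}: intelligence = {intelligence:.1f}")
        if intelligence >= ceiling:
            print("hit the conjectured limit")
            break


if __name__ == "__main__":
    run_explosion()
```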
> In AI these unexpected capabilities are referred to as "emergence" and it is repeatedly surprising everyone, including the project leads.

Now if the project leads didn't understand it, that would be a giant red flag.
This is interesting... I was not aware of this.

Ethics has always been a major issue for science, from the study of human anatomy to studying the effects of drugs and pathogens. Sometimes we get it wrong. It is no different with A.I. There are clearly dangers to humanity with A.I., just as there are with gain-of-function studies in virology.
Can we apply the Three Laws of Robotics to all AI, or is that an impossibility? Can we program A.I. with failsafe parameters to prevent something like "Colossus: The Forbin Project"? Should we limit the application of A.I.?
Remember, transitions in society do not take place overnight. We will adapt to change, even to A.I., as we go. Automating jobs has been going on forever, and we have adjusted along the way. I believe there will always be jobs for humans, even in a fully automated production environment. They just may not be jobs the way we know them today.
btw, a lesser-discussed danger in science is genetics. We continue to advance our knowledge and abilities in this area, and there are concerns about abuse. There has already been suspicion that China is experimenting with creating more advanced humans that are smarter, stronger, and generally superior to other humans. We could see a kind of race to apply genetics in warfare.
Man, you are so right. People's opinions about shutting down A.I. do not matter. Technology is always taken to its fullest practical limits and weaponized. In my opinion nothing is being stopped.
I will give you a hint on cars. The technology will not stop until cars can be shut off or piloted remotely without the driver's consent, with full-time travel tracking and facial monitoring while driving. And when a China-like social credit system is fully in place here, your car might not turn on depending on what you have said or done, and depending on environmental factors your car might not be usable at all.
Tesla never would have stayed afloat as a company without government money. Musk's companies are government funded, and it is not for the planet or for the benefit of the people.
https://www.sanctuary.ai/news/
https://www.youtube.com/@sanctuaryai/videos
Sanctuary is a company started by Geordie Rose, who was also a founder of D-Wave computers and Kindred A.I.
Their goal is to make an A.I. robot better than humans at everything, and his claim in 2017 was that the world was very close to that goal.
If you look at what is out there, I would say there is a long way to go, but I doubt what we see is anywhere near where they are at with the technology. I wonder about the robot fingers. I would love to see a time-lapse of A.I. robot fingers using machine learning to do different tasks, and to see how hard it would be for them to succeed.
If you look into the "black box conundrum", or the "ghost in the machine" / "spirit in the machine" issue, it is very interesting. They do not understand how A.I. comes up with some of the answers it gets, or how it formulates those answers within its algorithm loops.
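As a small illustration of that black box point, here is a toy sketch assuming scikit-learn (my own example, not from anything linked here). A little neural net learns XOR; its raw weights do not read like an explanation, so the only way to say which input mattered is to probe it from the outside, for example with permutation importance.

```python
# Toy "black box" demo: the trained weights don't explain the model's answers,
# so we probe it from the outside. Assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 3)).astype(float)  # third column is irrelevant noise
y = np.logical_xor(X[:, 0], X[:, 1]).astype(int)     # label depends only on columns 0 and 1

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print("accuracy:", model.score(X, y))

# The raw weights are just numbers; nothing here reads like "XOR of the first two inputs".
print("first-layer weights:\n", model.coefs_[0])

# Outside-in probe: shuffle each input column and see how much the accuracy drops.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("permutation importance per input:", result.importances_mean)
```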
You can laugh this off, but you can listen to a lot of these artificial intelligence people talking about aliens, gods, summoning demons, and spirits.
I put those two things together in my mind: unknown answers and summoning answers. Here is one example.
Remember all those videos we used to laugh at... where clumsy robots would fall over?