Pass2Run
Yep. I personally like Copilot. I got a fascinating and accurate output when I asked it to create a theory of the cause of gravity based on quantum entanglement, string theory, spacetime, and mass. Yes, today's AI is rudimentary. It processes data and devises answers to problems based on the human knowledge it can consume. That limits its value. But the question is, can AI ever think on its own? Can it deduce conclusions rather than derive answers from computations it has been programmed to make?
I think AI, even in its current forms, has some value; we just have to be careful not to value its thinking too much. I see people on X asking Grok questions to prove a point one way or the other. But Grok is not always right, and it sometimes contradicts itself as the information available to it is updated. Still, you can ask Grok to do some simple tasks, like develop a standard Business Continuity Plan based on certain parameters, and it will do a fairly decent job producing a basic plan that can then be tailored to specific conditions.
I think the problem will be if people place too much trust in AI and begin to use it as an arbiter of truth or fact.
People say science doesn't know the cause of gravity. But Copilot convinced me it does, or at least it produced a very plausible theory.