Mark Zuckerberg: Well, I'm not an expert in quantum computing. My understanding is that it's still a long way from being a practical paradigm. I think Google just made some breakthroughs, but I think most people still think it's more than a decade away. So my guess is that we'll have very smart AI before then.
But yeah, I mean, look, I think these things have to be developed very thoughtfully, right? But I don't know; I still think we'd be better off overall in a world where this technology was evenly deployed. You know, there's another analogy I've thought about: basically every piece of software that everybody uses has bugs and security holes. If you could go back a few years knowing the security vulnerabilities we know about now, you as an individual could basically break into any system.
AI can do the same thing: it can detect and find vulnerabilities. So how do we prevent AI from getting out of control? I think part of it is widespread deployment of AI, so that one system's AI can defend against potentially problematic AI in another system. I don't think it's an AI war; that's not how I'd put it. I think it's just... I don't know, I think it's...
It's kind of like, why do we have guns, right? I mean, part of it is hunting, but part of it is also that people can defend themselves. It's like antivirus software. I don't think you'd want to live in a world where just one person owned all the guns.
Host: I think that's a realistic, pragmatic view, because I don't think you can control it at this point. It's too late, especially when other countries are also working on it. It's happening. And as you said, guardrails are really, really important.
Yeah, but what if it's combined with quantum computing?