What do you guys think about this topic? Do you believe technology will reach this point in the next 50 years, or never? It's crazy to look at the new technologies coming out and wonder if it will ever end. If AI were created, would it be beneficial or would it cause harm? Discuss
Miclovin: What do you guys think about this topic? Do you believe technology will reach this point in the next 50 years, or never? It's crazy to look at the new technologies coming out and wonder if it will ever end. If AI were created, would it be beneficial or would it cause harm? Discuss
A lot will happen before I die. It scares me... imagine what my kids will use instead of mobile phones in the future... (I'm only 19)
Most likely I will need to work until I'm 80 and live until I'm 100 years old. Is that a good thing? I'm kind of back and forth on that one.
AI as in being able to perfectly replicate a human? I think technology will eventually get to that point but it will always be stuck in the uncanny valley. We'll never be able to get over the fact that it's not a real human. Also, whoever is funding the projects will have a huge effect on the behaviour.
Sam Harris discusses implications of AI. If you are interested in AI def listen to this guy. One of the leading thinkers on the subject.
He has some great thoughts not only about the AI itself, but about what the world would look like around AI. Unemployment, politics, and war are just some of the issues that could arise from something like this. If technology growth is exponential, then it must be coming a lot sooner than we think.
I think AI will certainly reach a point where it passes the Turing test. An AI reaching real, authentic sentience, though, I highly doubt. I just can't believe there's a way to program something like the human mind, or that it could ever achieve life outside of an organic body.
Artificial Intelligence is already around.
I personally think we need to push our technology in other directions though. Like my iPhone is fine, let's improve other aspects of our societies and other products in our day to day lives.
DaveM: Season 2 of Black Mirror did a better job of that IMO, more personal and less wealth-based.
I don't think we'll ever reach the robot stage like in the episode where the husband dies. But the whole analyzing speech patterns and being able to talk to a program that responds like whoever you want seems reasonable.
There will be some giant movement saying that robots have equal rights to humans or something really stupid. It's bound to happen if robots get really similar to humans.
.lencon: Artificial Intelligence is already around.
I personally think we need to push our technology in other directions though. Like my iPhone is fine, let's improve other aspects of our societies and other products in our day to day lives.
I think you are talking about general intelligence. Stuff like Siri and Google Maps? AI would be next level and would stir up a bunch of shit. The unemployment aspect is a lot to think about. If machines could perform human jobs, then why wouldn't business owners stick with that?
Miclovin: I think you are talking about general intelligence. Stuff like Siri and Google Maps? AI would be next level and would stir up a bunch of shit. The unemployment aspect is a lot to think about. If machines could perform human jobs, then why wouldn't business owners stick with that?
Agreed, but presumably AI would invent things we otherwise wouldn't have, and could therefore create jobs for humans that didn't previously exist. Yeah, burgers will no longer be flipped by people, but personally I think that's about the lowest-level application for this type of stuff.
milk_man: True... but I don't see AI being able to haul coal to the power plants which power the AI haha
I don't see it being silly enough to use coal. But the fact that neither of us can see how it would solve that is exactly the point of all of this: it will do things we never thought of.
Miclovin: I think you are talking about general intelligence. Stuff like Siri and Google Maps? AI would be next level and would stir up a bunch of shit. The unemployment aspect is a lot to think about. If machines could perform human jobs, then why wouldn't business owners stick with that?
Siri is general intelligence, but there are apps/programs that learn and remember. Obviously they aren't advanced enough to be next level AI, but I'm pretty sure they're still considered artificial intelligence.
bdarb207: I don't see it being silly enough to use coal. But the fact that neither of us can see how it would solve that is exactly the point of all of this: it will do things we never thought of.
Yeah, but there are so many details that it wouldn't be able to control... unless everything becomes computerized and robotic, a part of the internet of things, AI wouldn't be able to take over.
AI needs electricity, and most of our electricity comes from burning coal.
milk_man: Yeah, but there are so many details that it wouldn't be able to control... unless everything becomes computerized and robotic, a part of the internet of things, AI wouldn't be able to take over.
AI needs electricity, and most of our electricity comes from burning coal.
People never thought we would be able to put a man on the moon, and not even a century later we're planning on colonizing Mars. I honestly wouldn't be surprised if 30 years down the road we have technologies and software that surpass many human functions.
After attending SXSW this year, AI is much closer than I thought. A great use of AI and machine learning will be to perform tasks that people find tedious or monotonous. Adobe Sensei is a great example. http://www.adobe.com/sensei.html
This is a pretty good showcase of the current state of AI. Does it have all the things you asked for? Probably. But a lot of it is “hallucinated”. The AI doesn’t fundamentally understand that a straight air out of a slide opening is undeniably “not gas”. Also, I wouldn’t be caught dead on a pair of SEOS. And a 1:1 aspect ratio fish eye? 2012 Instagram called, they want their hard post back.
Once we get to a point where the model has created its own K Fed layers and pretzel-based feedback loops, I’ll start taking it more seriously.
Sam Harris discusses implications of AI. If you are interested in AI def listen to this guy. One of the leading thinkers on the subject.
The first few comments in this thread and this TED talk are seriously spooky, especially when you've got Sam Altman saying AGI is happening this year and already talking about superintelligence as the next step. Power and data storage are major issues, but we're already seeing some groundbreaking research coming out in those areas too. Shit is moving wicked fast, and I've seen too many stories about engineers being shunned or ignored when warning about AI ethics and dangers. Sure, companies and CIOs have collectively developed ethics guidelines and shit, but is that gonna be enough?
iH8pow: Until our stoplights aren’t stupid I refuse to believe AI is any use
The math behind stoplights is actually super interesting, but there are still so many flaws with things like midnight reds that AI integration could be super cool.
iH8pow: Until our stoplights aren’t stupid I refuse to believe AI is any use
The energy wasted from letting one car stop half a dozen+ others at a stoplight is incredible. I get so frustrated with suburban traffic lights; they're never great.
CalumSKI: The math behind stoplights is actually super interesting, but there are still so many flaws with things like midnight reds that AI integration could be super cool.
Is part of it because cities aren't leveraging predictive analytics to build better systems? Or is the issue collecting the data accurately to make those informed decisions on traffic patterns?
X Games halfpipe judging being replaced with AI, because the judges are about to off themselves if they see the same robotic dub 14 from all 20 skiers there, is a move in the right direction
HypeBeast: Is part of it because cities aren't leveraging predictive analytics to build better systems? Or is the issue collecting the data accurately to make those informed decisions on traffic patterns?
Pretty sure it's a mix. Also, as population sizes increase at a rapid rate, traffic patterns are bound to change. If they used some learning AI system to adapt to the higher traffic levels, it would be interesting to see how it would change things.
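Not a traffic engineer, but as a rough sketch of what "adapting to traffic levels" could look like in the simplest case: give the green to whichever approach has the longer queue and stretch its duration by demand, instead of running a fixed cycle. The queue counts and timing constants below are made up for illustration.

```python
# Rough sketch of adaptive signal timing: pick the approach with the longer
# queue and scale its green time by demand, instead of a fixed cycle.
# All numbers here are invented for illustration.

def next_green(ns_queue: int, ew_queue: int,
               base_s: float = 15.0, per_car_s: float = 2.0,
               max_s: float = 60.0) -> tuple[str, float]:
    direction = "NS" if ns_queue >= ew_queue else "EW"
    demand = max(ns_queue, ew_queue)
    return direction, min(base_s + per_car_s * demand, max_s)

# Midnight example: one lonely car waiting east-west, nothing north-south,
# so it gets a short green instead of sitting at a dumb fixed-cycle red.
print(next_green(ns_queue=0, ew_queue=1))  # ('EW', 17.0)
```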
stokedelic: This is a pretty good showcase of the current state of AI. Does it have all the things you asked for? Probably. But a lot of it is “hallucinated”. The AI doesn’t fundamentally understand that a straight air out of a slide opening is undeniably “not gas”. Also, I wouldn’t be caught dead on a pair of SEOS. And a 1:1 aspect ratio fish eye? 2012 Instagram called, they want their hard post back.
Once we get to a point where the model has created its own K Fed layers and pretzel-based feedback loops, I’ll start taking it more seriously.
As a software engineer who spends a significant amount of time reading papers and building AI systems in a professional setting, I somewhat agree with you here, but I also disagree to a very large extent.
A general-purpose approach is going to have these issues, but a finely tuned model will get you significantly closer. It's akin to comparing an off-the-shelf LLM to a cluster of RAG deployments with tuned prompts sitting in front of an LLM gateway using up-to-date models. The more specific you get, the higher the benchmark scores rise and the fewer hallucinations you see.
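To make the RAG comparison concrete, here's a minimal sketch of the retrieval step: embed the query, pull the closest documents, and stuff them into the prompt before it ever reaches the gateway. embed() and call_llm() are placeholders for whatever embedding model and gateway client you actually use, not any specific vendor's API.

```python
import numpy as np

# Minimal RAG sketch: rank documents by cosine similarity to the query
# and prepend the top hits to the prompt. embed() and call_llm() are
# placeholders, not a real vendor API.

def embed(text: str) -> np.ndarray:
    raise NotImplementedError("plug in your embedding model")

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM gateway client")

def answer(query: str, docs: list[str], k: int = 3) -> str:
    doc_vecs = np.stack([embed(d) for d in docs])
    q_vec = embed(query)
    sims = doc_vecs @ q_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec)
    )
    top = [docs[i] for i in np.argsort(sims)[::-1][:k]]
    prompt = (
        "Answer using only the context below.\n\n"
        + "\n\n".join(top)
        + f"\n\nQuestion: {query}"
    )
    return call_llm(prompt)
```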
The entire goal right now, from essentially all leaders in the field, is frictionless systems. If I'm a senior partner at a hedge fund and I want to know the quality of today's ingested data, that currently requires a team of engineers building an ingestion pipeline and some form of anomaly detection or immutable validation infrastructure. It has to be built, tested, deployed, and maintained. The goal, if they had their way, is that the senior partner would instead just ask some agent to test today's data for anomalies. The agent would unpack the request, build and validate the process, run the data through the pipeline, then return the results before tearing it all down, requiring significantly smaller teams that can scale extensively and removing barriers to entry caused by needing to know the working tools.
That process is what you're starting to see unfold across the board, and it not only has outstanding utility but also real-world impact, both positive and negative.
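For a concrete (hypothetical) version of the hedge fund example: this is roughly the kind of validation pass that engineering team hand-builds today, and the "frictionless" version is just the same check requested from an agent in plain English. Thresholds and column handling here are made up.

```python
import pandas as pd

# Roughly what that engineering team hand-builds today: a basic data-quality
# pass over the day's ingest, flagging volume that looks anomalous against
# history. Thresholds are invented for illustration.

def check_todays_ingest(today: pd.DataFrame, daily_row_counts: pd.Series) -> dict:
    report = {
        "rows": len(today),
        "null_fraction": float(today.isna().mean().mean()),
        "duplicate_rows": int(today.duplicated().sum()),
    }
    # Flag today's volume if it sits more than 3 standard deviations
    # away from the historical daily row counts.
    std = daily_row_counts.std() or 1.0
    z = (len(today) - daily_row_counts.mean()) / std
    report["volume_anomaly"] = bool(abs(z) > 3)
    return report
```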
Of course there are tools such as v0 by Vercel, as well as all the other off-the-shelf tuned models, but you are also starting to see tools such as https://makereal.tldraw.com/ which is essentially a prompt generator. If I pass in the following horribly annotated sketch, it will produce the code for a webcam app that takes a photo and saves it locally on my machine.
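For reference, the equivalent behaviour is only a few lines in any language. As a rough Python/OpenCV stand-in for that generated app (makereal itself emits web code, not Python):

```python
import cv2  # pip install opencv-python

# Stand-in for the generated webcam app: grab one frame from the default
# camera and save it locally. (makereal itself outputs web code, not Python.)

def take_photo(path: str = "photo.jpg") -> bool:
    cam = cv2.VideoCapture(0)     # open the default webcam
    ok, frame = cam.read()        # capture a single frame
    cam.release()
    if ok:
        cv2.imwrite(path, frame)  # save it to disk
    return ok

if __name__ == "__main__":
    print("saved photo.jpg" if take_photo() else "no camera found")
```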
Of course this is an extremely basic example that a child could do if they desired, and makereal isn't an actual product. But this process of offloading work to AI systems and agents is burning like wildfire through corporations in an attempt to remove the friction of having to be knowledgeable about certain subjects, tools, frameworks, etc. You are seeing the above example working with significantly more complex systems. To use your example, if someone asks an agent to edit a photo, it would connect to an agent, build up the tooling using an Adobe API, adjust it to fit the requirements, and produce an output without the user ever having to touch a tool.
What I agree on is that there is a ton of overhyped bullshit being pushed around and funded by private equity, as everyone wants a piece of the next big thing. But don't let the noise blind you to the real utility and growth that is actually happening.
Take the ARC-AGI test for example, as it's become a de facto gold standard benchmark. It tests the model's ability to learn new skills on the fly, i.e. not looking back at a backlog of training data but trying to reason its way to novel solutions. The score is the percentage of benchmark tasks the model solves accurately (rough sketch of that scoring after the list below). For reference, a human with prior knowledge of the subject hits roughly 85% on the test inputs.
Here is the timeline of benchmark scores:
- GPT-2 (2019): 0%
- GPT-3 (2020): 0%
- GPT-4 (2023): ~2%
- GPT-4o (2024): ~5%
- o1-preview (2024): ~21%
- o1 high (2024): ~32%
- o1 pro (2024): ~50%
- o3 tuned, low compute (2024): 76%
- o3 tuned, high compute (2024): 87%
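For anyone wondering what those percentages actually measure: ARC tasks are small input-to-output grid puzzles, and (roughly) a prediction only counts if the output grid matches exactly, so the score is just the fraction of held-out tasks solved. Quick sketch:

```python
# Rough sketch of ARC-style scoring: each task is an input -> output grid
# puzzle, a prediction only counts on an exact grid match, and the score
# is the percentage of held-out tasks solved.

Grid = list[list[int]]

def arc_score(predictions: list[Grid], solutions: list[Grid]) -> float:
    solved = sum(p == s for p, s in zip(predictions, solutions))
    return 100.0 * solved / len(solutions)

# e.g. exactly matching 76 of 100 test tasks -> a score of 76.0
```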
So between 2019 and early 2024, the benchmark score rose by only about 5 percentage points. Then over the course of 2024 alone, it jumped from ~5% to 87%, with o3 tuned at high compute scoring above the human baseline.
The tools you are talking about are likely using GPT-3 or GPT-4, so on that point I agree with you. But when you combine that growth in accuracy and reasoning with the tools mentioned above, you start to see what's actually happening beyond the bullshit hype. Whenever someone says they don't take it seriously, I assume they've simply only been exposed to off-the-shelf tools running shit like GPT-3 and 4.