The Entrepreneur Forum | Financial Freedom | Starting a Business | Motivation | Money | Success


I'm having AI anxiety. What are your thoughts on upcoming AI?

Threads with an ongoing chat or conversation

Olliewhe

New Contributor
User Power
Value/Post Ratio
243%
Apr 19, 2023
7
17
Hello, I am interested in hearing your thoughts on upcoming AI tools. Initially, they seemed incredible, but now they are causing me anxiety. Do you share this feeling, or am I alone in this? The ongoing battle between Altman and Musk regarding AI, as well as the proposed six-month pause on AI labs, have added to my concerns. Furthermore, it appears that some jobs, such as content writing, coding, and graphic design, are already being taken over by AI to some extent. All of this is quite alarming.
Use AI to your advantage. My business uses AI, and I've absolutely destroyed my competition with it.
 
Dislike ads? Remove them and support the forum: Subscribe to Fastlane Insiders.

Andy Black

Help people. Get paid. Help more people.
Staff member
FASTLANE INSIDER
EPIC CONTRIBUTOR
Read Fastlane!
Speedway Pass
User Power
Value/Post Ratio
369%
May 20, 2014
18,727
69,160
Ireland
I'm pretty sure retail AI is nowhere near as powerful as military AI, and that there's an arms race to have the best AI so we don't get killed by those with better AI.
 

Gabo96

New Contributor
User Power
Value/Post Ratio
129%
Feb 14, 2019
7
9
Fearmongering nonsense.

A, GPT-4 isn't nearly as good as the media hype makes it out to be. It's a cool, effective, useful tool, but it's not replacing anybody yet. If it's passing standardized tests, it's because those tests were never particularly effective at what they were supposed to be doing.

B, what exactly are you concerned this thing is going to do to you? Even the stupid paperclip thing requires the fantasy that not only can you be magically converted into paperclip materials, but also that something powered by coal and confined to a hard drive can physically force you into the magic paperclip material conversion machine.

Realistically? Eventually, AI will obsolete some jobs. Just like you don't see a lot of travel agents around anymore. Where'd they all go? Are they paperclips now, or did they just find different jobs? So there'll be some economic readjustment, but that's what always happens when things change.

Finally, there is absolutely no way to put the genie back into the bottle. Technology marches in one direction, relentlessly, and no amount of big forum text is going to change that. Demanding that everyone acknowledge your opinions and then jam their heads in the sand about the reality of the world isn't going to help anyone.
My goal isn't to scare people just for the sake of traumatizing or depressing people. I'm sorry if thinking that kind of doomsday scenario makes people feel uncomfortable. But hey, if the ship is sinking, does it matter if you depress people or do you prioritize saving the ship from disaster?

Now, the question is whether the ship is actually sinking or not. Or rather, how big is the chance of a bad outcome?

You might disagree with me and believe the risk is non-existent or very far off in the future, but that doesn't mean my position is nonsense. Can you at least concede that?

Geoffrey Hinton recently quit his job at Google so he could talk openly about his fears about AI.
1. We're talking about an important figure in the field. Meaning, he's credible and respected by his colleagues.
2. He's afraid of machines becoming smarter than us, not just job loss or possible malicious use of AI by humans (which is almost universally agreed to be a real near-future concern, and enough to raise an alarm).

The median AI researcher, according to this survey, thinks there's a 10% chance of humans failing to control AI:

So can we agree that the thought of a very bad outcome to the current AI arms race isn't far out of left field?

Responding to your points:

1. GPT-4 isn't impressive? Come on. GPT-3.5 is already pretty impressive. ChatGPT is able to write code, poetry, sales letters, legal text, and fiction better than most humans and faster than any human. It's extremely impressive.

Ok, maybe standardized tests aren't effective at measuring how good humans are at the professions they were designed for? Fair enough, but what's breathtaking is that GPT-3.5 scored in the bottom 10% while GPT-4 scored around the 90th percentile on some tests. The rate of improvement is significant. As far as I'm concerned, LLMs started being a thing in 2018, they were laughable in 2020, and somehow in 2022 they made a big leap forward.

The same pattern has repeated in other domains/applications. It went from being dumber than most humans to smarter than most humans to superhuman really fast. It also developed emergent capabilities that we didn't expect and don't yet fully understand.

It also seems we're currently moving the goalpost of what constitutes something "truly intelligent", and we're definitely past the point of what we might have considered intelligence in a machine 50 years ago (not talking about sentience or consciousness, that's not the point).
What's the requirement or measure for you to consider AI truly/generally intelligent?

The last frontier is AI being capable of scientific thought (i.e., given all the information available up to the start of the 20th century, it would come up with the theory of relativity). At that point it's already too late, which brings me to the next point:

2. The risk is AI becoming smarter than humans and capable of programming itself. If it's capable of recursively improving itself, it can become superintelligent in a very short amount of time: as much smarter than us as we are than a frog, maybe more. At that point, do you really think we would be able to control something far smarter than us? Intelligence is power. Whatever method we devise to control it will probably fail, or the AI will find a way around it, or it will simply work in unintended ways.

Saying that AI is confined to a hard drive is like saying that we're confined to our brains. If AI has access to the internet, it has access to a lot of resources, electronic devices, and people.

It's not necessarily that AI is 'evil' and wants humanity destroyed, it's just that it doesn't care, it doesn't have any morality. It just optimizes for some outcome that we don't understand and doesn't particularly care about the wellbeing of humanity, and will likely use resources that are vital for our survival.

We have empathy because we evolved in an ancestral environment where mutual cooperation was essential to the survival of our genes. Creating a "Friendly AI" is the challenge of instilling morality into something completely alien that didn't 'evolve' in the same environment as us and doesn't feel or think like us, but can appear as though it does. What looks like AI showing empathy or moral sentiment is really AI mimicking human-generated text.

Sadly, I agree with your last paragraph: there's no putting the genie back in the bottle, or at least it's very hard to do. But thinking it's impossible is part of the problem. If everyone thinks AI risk is nonsense, and that everyone else will think it's impossible to stop the arms race, everyone will keep scaling AI. There's incentive to keep going (losing to the competition, being attacked by other countries, etc.) and no incentive to stop, since AI risk is "fearmongering nonsense." If more people start to believe that AI risk is real, they will have less incentive to keep going: AI might kill you even if you win, and other people are starting to believe the same, so they're going to stop too.

So, we can try. Or at least we can delay it. Even if it's inevitable, I'd rather have AI doom in 15 years and not 5 years.

I'm a very optimistic person; I hope I made my points clear and that this doesn't come off as doom nonsense.
 

James Klymus

Gold Contributor
FASTLANE INSIDER
Read Fastlane!
Read Unscripted!
Speedway Pass
User Power
Value/Post Ratio
353%
Dec 28, 2018
474
1,672
28
Chicago, Illinois
Hello, I am interested in hearing your thoughts on upcoming AI tools. Initially, they seemed incredible, but now they are causing me anxiety. Do you share this feeling, or am I alone in this? The ongoing battle between Altman and Musk regarding AI, as well as the proposed six-month pause on AI labs, have added to my concerns. Furthermore, it appears that some jobs, such as content writing, coding, and graphic design, are already being taken over by AI to some extent. All of this is quite alarming.
Stay off the internet. Seriously. Reading and watching all of this content about how AI will end the world is not productive to your career goals OR mental health.

We humans gravitate to shocking things, especially when they’re negative. We now live in a world where everyone is fighting for your attention, and in order to win you have to make content more and more shocking.

Quit consuming this stuff, it’s doing you no good. Stay off of YouTube, instagram, TikTok and news sites.

And if you don’t want to listen to this advice, do this instead. If AI interests you so much, Do ALL the research you possibly can on AI and start a YouTube channel/social media account talking about AI. If it interests you, it sure as hell will interest others. At least you will have shifted from consumer to producer.
 

9ofPentacles

New Contributor
User Power
Value/Post Ratio
267%
May 2, 2023
3
8
Hello, I am interested in hearing your thoughts on upcoming AI tools. Initially, they seemed incredible, but now they are causing me anxiety. Do you share this feeling, or am I alone in this? The ongoing battle between Altman and Musk regarding AI, as well as the proposed six-month pause on AI labs, have added to my concerns. Furthermore, it appears that some jobs, such as content writing, coding, and graphic design, are already being taken over by AI to some extent. All of this is quite alarming.

When it comes to the application of AI, the sky is the limit. Yet the flip side of the coin is that, according to a study by the University of Pennsylvania, it's going to replace a lot of jobs sooner than you can imagine, including decent-paying jobs like accountant, web designer, etc. (the study provided a list). The worst-case scenario is a domino effect of job loss: when the first batch of people gets laid off, their spending power plummets, which means less demand in the market for products and services, and this in turn leads to a second batch of redundancies, and then a third (even though the second and third batches are probably not directly affected by AI, they are affected by the reduced spending of the first batch)...

The difference between the upcoming AI revolution and the industrial revolution is that the latter afflicted mostly low-income earners, who could switch jobs from, say, private shoemaking to shoe-factory work. But what can an accountant do when he can no longer do accounting? (No offense toward accountants, just making a point here.)

The good news is, if you look at the list of jobs AI will be replacing in the future, you can identify the industries where AI is needed, and there lie potential and opportunities: invent something leveraging AI for that industry and you'll be rich. Easier said than done, I know.
 

MarcusAurelius

Always be kind. No matter what.
FASTLANE INSIDER
Read Rat-Race Escape!
Read Fastlane!
Read Unscripted!
Speedway Pass
User Power
Value/Post Ratio
97%
May 8, 2023
38
37
Italy
Hi,

I’m new here.

I’ve also thought about all the implications an AI can bring to our lives. Personal and professional. Positive and negative implications.

I believe that anything can be viewed from different points of view.

From a purely rational point of view, it is clear that AI will replace all those professionals who simply copy/paste content. They just input data.

Probably, AI will make us more "human" and less dependent, at least mentally, on computers and accessories.

If you think about it, AI can help human intervention, but it can never replace it.

Qualities such as compassion, intuition and empathy will remain advantages of human beings.

An AI can never replace the sensations that human contact gives you. It’s not possible.

The proof?

They created Pi, a GPT-style chatbot that aims to be a kind of "psychologist" for the user who interacts with it. It asks how you are and holds a conversation. But it cannot be compared to a human being.

And those who have tried it have noted that its questions and sentences felt like stock phrases, pulled from preloaded scripts. Phrases that may not have the desired effect.

But most of all, AI can’t handle silence. That’s a fundamental thing in any human interaction.

Whether it is a therapy, whether it is a sale, or a dialogue with a friend or a potential partner.

Ultimately, I believe and hope that AI will make us come back to understand how important real human contact is, rather than continuing to interact via screen and simply writing messages.

Human interaction is, by nature, a two-player game. A "tennis game" where you pass the ball. A "dance" where you accompany each other towards a common solution.

This, AI, can never replace it. Because it cannot handle these situations.

It will probably replace copywriters. Why? Because selling on paper is just you, and only you, talking the whole time until the customer buys.

There’s no interaction. It’s a one-way dialogue.

It’s a tennis game where you keep throwing balls until your opponent declares defeat.

It is a dance by yourself, to convince your interlocutor that what you do is good.

Staying in the realm of copy, there will probably no longer be long-winded newsletters or 50-page sales letters.

There will probably be a need for interaction: more complex ecosystems that let the customer "interact," giving him the opportunity to choose what to do and when to do it.

This applies in copy as in any other sector: Can an AI replace a medical procedure? Probably yes.
Can it tell someone that they have cancer and need emergency surgery? Absolutely not.

Because it lacks the characteristics that make us human.

It will be crucial to learn how to ask the right questions. Which AI cannot do at the moment.

We must know how to exploit this invaluable advantage. We are human and we will always be. Within our limits, of course, but also for our endless possibilities.
 

Johnny boy

Legendary Contributor
FASTLANE INSIDER
EPIC CONTRIBUTOR
Speedway Pass
User Power
Value/Post Ratio
634%
May 9, 2017
3,022
19,169
27
Washington State
I want to create an AI therapist: a landing page for a therapy service that caters to people with AI anxiety, where they can sign up and talk to an AI therapist about it for a monthly fee, so I can make money off of irony.
 

MarcusAurelius

Always be kind. No matter what.
FASTLANE INSIDER
Read Rat-Race Escape!
Read Fastlane!
Read Unscripted!
Speedway Pass
User Power
Value/Post Ratio
97%
May 8, 2023
38
37
Italy
I want to create an AI therapist: a landing page for a therapy service that caters to people with AI anxiety, where they can sign up and talk to an AI therapist about it for a monthly fee, so I can make money off of irony.
It already exists: Pi, your personal AI

It seems that Pi gives basic replies and can't handle silence (essential in therapy)
 

Subsonic

How you do anything is how you do everything
FASTLANE INSIDER
Read Rat-Race Escape!
Read Fastlane!
Read Unscripted!
Speedway Pass
User Power
Value/Post Ratio
282%
Aug 16, 2022
965
2,725
19
Germany
My goal isn't to scare people just for the sake of traumatizing or depressing people. I'm sorry if thinking that kind of doomsday scenario makes people feel uncomfortable. But hey, if the ship is sinking, does it matter if you depress people or do you prioritize saving the ship from disaster?

Now, the question is whether the ship is actually sinking or not. Or rather, how big is the chance of a bad outcome?

You might disagree with me and believe the risk is non-existent or very far off in the future, but that doesn't mean my position is nonsense. Can you at least concede that?

Geoffrey Hinton recently quit his job at Google so he could talk openly about his fears about AI.
1. We're talking about an important figure in the field. Meaning, he's credible and respected by his colleagues.
2. He's afraid of machines becoming smarter than us, not just job loss or possible malicious use of AI by humans (which is almost universally agreed to be a real near-future concern, and enough to raise an alarm).

The median AI researcher, according to this survey, thinks there's a 10% chance of humans failing to control AI:

So can we agree that the thought of a very bad outcome to the current AI arms race isn't far out of left field?

Responding to your points:

1. GPT-4 isn't impressive? Come on. GPT-3.5 is already pretty impressive. ChatGPT is able to write code, poetry, sales letters, legal text, and fiction better than most humans and faster than any human. It's extremely impressive.

Ok, maybe standardized tests aren't effective at measuring how good humans are at the professions they were designed for? Fair enough, but what's breathtaking is that GPT-3.5 scored in the bottom 10% while GPT-4 scored around the 90th percentile on some tests. The rate of improvement is significant. As far as I'm concerned, LLMs started being a thing in 2018, they were laughable in 2020, and somehow in 2022 they made a big leap forward.

The same pattern has repeated in other domains/applications. It went from being dumber than most humans to smarter than most humans to superhuman really fast. It also developed emergent capabilities that we didn't expect and don't yet fully understand.

It also seems we're currently moving the goalpost of what constitutes something "truly intelligent", and we're definitely past the point of what we might have considered intelligence in a machine 50 years ago (not talking about sentience or consciousness, that's not the point).
What's the requirement or measure for you to consider AI truly/generally intelligent?

The last frontier is AI being capable of scientific thought (i.e., given all the information available up to the start of the 20th century, it would come up with the theory of relativity). At that point it's already too late, which brings me to the next point:

2. The risk is AI becoming smarter than humans and capable of programming itself. If it's capable of recursively improving itself, it can become superintelligent in a very short amount of time: as much smarter than us as we are than a frog, maybe more. At that point, do you really think we would be able to control something far smarter than us? Intelligence is power. Whatever method we devise to control it will probably fail, or the AI will find a way around it, or it will simply work in unintended ways.

Saying that AI is confined to a hard drive is like saying that we're confined to our brains. If AI has access to the internet, it has access to a lot of resources, electronic devices, and people.

It's not necessarily that AI is 'evil' and wants humanity destroyed, it's just that it doesn't care, it doesn't have any morality. It just optimizes for some outcome that we don't understand and doesn't particularly care about the wellbeing of humanity, and will likely use resources that are vital for our survival.

We have empathy because we evolved in an ancestral environment where mutual cooperation was essential to the survival of our genes. Creating a "Friendly AI" is the challenge of instilling morality into something completely alien that didn't 'evolve' in the same environment as us and doesn't feel or think like us, but can appear as though it does. What looks like AI showing empathy or moral sentiment is really AI mimicking human-generated text.

Sadly, I agree with your last paragraph: there's no putting the genie back in the bottle, or at least it's very hard to do. But thinking it's impossible is part of the problem. If everyone thinks AI risk is nonsense, and that everyone else will think it's impossible to stop the arms race, everyone will keep scaling AI. There's incentive to keep going (losing to the competition, being attacked by other countries, etc.) and no incentive to stop, since AI risk is "fearmongering nonsense." If more people start to believe that AI risk is real, they will have less incentive to keep going: AI might kill you even if you win, and other people are starting to believe the same, so they're going to stop too.

So, we can try. Or at least we can delay it. Even if it's inevitable, I'd rather have AI doom in 15 years and not 5 years.

I'm a very optimistic person; I hope I made my points clear and that this doesn't come off as doom nonsense.
The whole "oh what if it can modify itself" thing is also funny. I tried to build a simple form-to-Google-Sheets integration with AI, and after many hours it didn't work.

Looking at the code it writes, it would probably commit instant suicide by changing its own code.
If it's at the point where it can write its own code, it's already far past the point where it matters.
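For what it's worth, the plumbing for that kind of integration is small. Here's a minimal stdlib-only Python sketch, assuming a Google Apps Script "web app" has been deployed against the sheet whose `doPost(e)` appends the submitted fields as a new row; the `SCRIPT_URL` and the field names are placeholders, not real endpoints:

```python
# Minimal sketch: send a web-form submission to a Google Sheet via a
# Google Apps Script web app. The deployment URL below is a placeholder.
from urllib.parse import urlencode
from urllib.request import Request, urlopen

SCRIPT_URL = "https://script.google.com/macros/s/YOUR_DEPLOYMENT_ID/exec"  # placeholder

def encode_submission(fields: dict) -> bytes:
    """URL-encode form fields for an application/x-www-form-urlencoded POST."""
    return urlencode(fields).encode("utf-8")

def submit(fields: dict) -> None:
    """POST the fields to the Apps Script endpoint, which appends a row."""
    req = Request(
        SCRIPT_URL,
        data=encode_submission(fields),
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    with urlopen(req) as resp:  # network call; fails until the URL is real
        resp.read()

# Example payload for a contact form:
payload = encode_submission({"name": "Ada", "email": "ada@example.com"})
```

The Apps Script side just needs a `doPost(e)` that calls `sheet.appendRow(...)` with the values in `e.parameter`, so there's no OAuth dance on the client.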
 

MJ DeMarco

I followed the science; all I found was money.
Staff member
FASTLANE INSIDER
EPIC CONTRIBUTOR
Read Rat-Race Escape!
Read Fastlane!
Read Unscripted!
Summit Attendee
Speedway Pass
User Power
Value/Post Ratio
446%
Jul 23, 2007
38,258
170,785
Utah
I think AI is going to ramp up the level of distraction people are experiencing on a daily basis.
It's going to increase the information overload, preventing people from doing "deep work."

Imagine trying to digest 20 YT videos in a matter of minutes. With AI, people will attempt it.

Social media was Distraction and Overload 1.0
Add in AI and it's going to be Distraction and Overload 2.0.


I also think entire industries will be redefined and likely, become more commoditized...

Fiction writing is the first thing that comes to mind. I've been wanting to write a fiction book for some time now, but the idea of putting my heart and soul into something (likely a multi-month or multi-year project) that will likely end up immersed in a pool of AI-written garbage created in days is disheartening.

While I don't have "anxiety" about AI, I do fear humans will de-evolve because of it, much like humans de-evolved when people got smartphones. Tools can be abused, especially when most people lack emotional intelligence and basic discipline. The average adult fails at the marshmallow experiment, and one look at their waistlines is an utter reflection of that failure.
 

Ismo29

New Contributor
Read Fastlane!
Read Unscripted!
User Power
Value/Post Ratio
119%
Dec 20, 2022
16
19
The only thing I'm anxious about is the EU regulating the hell out of it to the point where only big companies have access to it. I already have 3-4 project ideas that leverage AI.
 

Ismo29

New Contributor
Read Fastlane!
Read Unscripted!
User Power
Value/Post Ratio
119%
Dec 20, 2022
16
19
I think AI is going to ramp up the level of distraction people are experiencing on a daily basis.
It's going to increase the information overload, preventing people from doing "deep work."

Imagine trying to digest 20 YT videos in a matter of minutes. With AI, people will attempt it.

Social media was Distraction and Overload 1.0
Add in AI and it's going to be Distraction and Overload 2.0.


I also think entire industries will be redefined and likely, become more commoditized...

Fiction writing is the first thing that comes to mind. I've been wanting to write a fiction book for some time now, but the idea of putting my heart and soul into something (likely a multi-month or multi-year project) that will likely end up immersed in a pool of AI-written garbage created in days is disheartening.

While I don't have "anxiety" about AI, I do fear humans will de-evolve because of it, much like humans de-evolved when people got smartphones. Tools can be abused, especially when most people lack emotional intelligence and basic discipline. The average adult fails at the marshmallow experiment, and one look at their waistlines is an utter reflection of that failure.

Yeah, technology is meant to augment our natural senses, but what ends up happening much of the time is that our natural abilities begin to atrophy as we become reliant on it.
 

Johnny boy

Legendary Contributor
FASTLANE INSIDER
EPIC CONTRIBUTOR
Speedway Pass
User Power
Value/Post Ratio
634%
May 9, 2017
3,022
19,169
27
Washington State
AI is going crazy. It’s already doing things I can’t even believe.

They’re teaching shrimp how to fry rice

Honey has learned how to roast barbecue

Stones now know how to grind mustard

I’d like to think I’m pretty smart. I don’t know in the slightest how I would “base” a lubricant, so how the hell does WATER know how to do it?

Proof: (attached screenshot)
 

focusedlife

Bronze Contributor
Read Rat-Race Escape!
Read Fastlane!
Read Unscripted!
Speedway Pass
User Power
Value/Post Ratio
162%
Mar 16, 2013
130
211
Brooklyn, NY
Hello, I am interested in hearing your thoughts on upcoming AI tools. Initially, they were incredible, but now they are causing anxiety. Do you share this feeling, or am I alone in this? The ongoing bettle between Altman and Musk regarding AI, as well as the six-month pause on all AI Labs, have added to my concerns. Furthermore, it appears that some jobs, such as content writing, coding, and graphic design, are already being taken over by AI to some extent. All of this is quite alarming.
If you haven't, check out this book written by Reid Hoffman (co-founder of LinkedIn) and GPT-4.

I suspect it just might put your mind at ease?

I bought the book, but it's free → here
 

Gabo96

New Contributor
User Power
Value/Post Ratio
129%
Feb 14, 2019
7
9
The whole "oh what if it can modify itself" thing is also funny. I tried to build a simple form-to-Google-Sheets integration with AI, and after many hours it didn't work.

Looking at the code it writes, it would probably commit instant suicide by changing its own code.
If it's at the point where it can write its own code, it's already far past the point where it matters.
Less than a year has passed, and here we are.

Have you guys seen Sora? If that doesn't blow your mind, I don't know what to tell you.

Even the most optimistic people thought this quality of video generation was at least a year away. OpenAI itself states that this is an important step towards AGI.

Yet "experts" still think AGI is decades away, LOL.

Folks, this doesn't come from the perspective of being scared of innovation and technology. I think those are good, I also think AI is pretty good, at least state-of-the-art narrow AI.

Last year I leveraged AI to start a web design and video editing business. Now I'm about to launch an online course about Midjourney (an image-generation AI), probably one of the best in my language. I'm hyped about AI, and I'm super grateful for what I can do with it and how it lowers the barrier to entry for many content creators.

I just think we should NOT keep scaling AI past a certain threshold of capabilities without fully understanding how it works and how to control it. And progress in capabilities is happening blazingly fast.

I know I'm coming off as a complete looney to some of you. But you will see in the coming months that I'm right.

My goal with this post is to persuade at least ONE of you to take this matter seriously. If one person with a certain degree of power and influence starts to believe that AI is the #1 threat we face as a species (not just unemployment or misinformation, but actual existential risk), I'm happy.

This is not some copy-paste message I'm posting everywhere. I keep coming here because this forum has changed my life and I know there are a lot of intelligent and open-minded people here.

Tbh, being quick to dismiss AI as fearmongering is the type of response I would expect from an NPC.

This is one of the few places on the internet where people aren't afraid to go against conventional wisdom AND are willing to do something about it, instead of crying in a corner.
 

amp0193

Legendary Contributor
FASTLANE INSIDER
EPIC CONTRIBUTOR
Read Fastlane!
Read Unscripted!
Summit Attendee
Speedway Pass
User Power
Value/Post Ratio
442%
May 27, 2013
3,735
16,517
United States
If one person with a certain degree of power and influence starts to believe that AI is the #1 threat we face as a species (not just unemployment or misinformation, but actual existential risk), I'm happy.
Musk has been sounding the alarm on AI risks for a decade, and the powers that be are too dense to take any action. No one here at TFLF is going to touch that influence.

Unfortunately, I think the cat's out of the bag and we're all just along for the ride, wherever it takes us.

For better or worse, the dams of restraint burst when the OpenAI board caved and put Altman back in the hot seat. Human greed will continue to overpower caution.

In the meantime, I'll just use the tools as they come out to improve the net income of my business and watch in amazement at the changes coming down the pipeline.
 

srodrigo

Gold Contributor
Read Fastlane!
Read Unscripted!
Speedway Pass
User Power
Value/Post Ratio
131%
Sep 11, 2018
799
1,044
Yet "experts" still think AGI is decades away, LOL.
The problem is many people confuse AGI (Artificial General Intelligence) with ASI (Artificial Super Intelligence). The former is "just" a tool that can solve problems in a broad way. The latter is the scary one, theoretically surpassing humans, even experts in a given field. I'm no expert, but the current state of AI looks close to AGI and very far from ASI, so I'm not worried.
 

JordanK

Gold Contributor
Read Fastlane!
Read Unscripted!
Summit Attendee
Speedway Pass
User Power
Value/Post Ratio
293%
Feb 17, 2014
567
1,662
26
Ireland
There is an interesting trend I have observed over the last 10 years with our current technology that I believe will be very telling for A.I. too.

So many of the early innovations that the internet brought forward are being reversed or changed.

Streaming sites destroyed cable and movie rentals, but now you have to be signed up to 10 different apps with different charges, and they are going to show advertisements. Seems regressive.

Social media allowed anyone to create content and is eating away massively at TV audiences but now social media is just becoming TV on a device. Most ordinary people are retreating from posting content and just consuming.

Social media gave anyone massive reach, bypassing TV producers. But social media reach is increasingly algorithmic. Try growing an organic audience on Facebook, Instagram, X, or YouTube: not impossible, but definitely harder than a few years ago. There are gatekeepers and allowed opinions.

I have many more examples; these are just the most obvious.

---

I believe that the new content generation abilities of A.I will completely destroy/end social media and online news.

People's biggest concerns now are disinformation, propaganda, etc., but what I am witnessing in the real world is a complete retreat from social media. As the internet becomes swarmed with bots and generated content, people will increasingly just delete the apps and zone out.

As someone who follows geopolitics closely, I used to intently follow breaking developments on Twitter. I have actually returned to watching my nation's once-daily news broadcast on TV. It's much simpler than wading through a whole day's bullshit on a developing story, then realizing half the information was false to begin with. The turning point for me was the Israel-Gaza conflict. It's probably also the first time we've seen Facebook/Instagram actively trying to tune out news-related content because it's too contentious.

An interesting discussion for sure.
 

Andy Black

Help people. Get paid. Help more people.
Staff member
FASTLANE INSIDER
EPIC CONTRIBUTOR
Read Fastlane!
Speedway Pass
User Power
Value/Post Ratio
369%
May 20, 2014
18,727
69,160
Ireland
People's biggest concerns now are disinformation, propaganda, etc., but what I am witnessing in the real world is a complete retreat from social media. As the internet becomes swarmed with bots and generated content, people will increasingly just delete the apps and zone out.

As someone who follows geopolitics closely, I used to intently follow breaking developments on Twitter. I have actually returned to watching my nation's once-daily news broadcast on TV. It's much simpler than wading through a whole day's bullshit on a developing story, then realizing half the information was false to begin with. The turning point for me was the Israel-Gaza conflict. It's probably also the first time we've seen Facebook/Instagram actively trying to tune out news-related content because it's too contentious.
I'm curious if people are turning back to national news stations after tiring of all the click-bait on social media platforms.

And I'd love to know what content will do well on social media platforms in the future, when they're flooded with AI-produced content. Will it be content that can't be faked? What will that be?

It was interesting listening to Perry Marshall's thoughts on AI in the podcast:

He discussed how the internet killed off travel agents as we knew them, but it didn't stop people travelling. Now travel agents are higher-ticket, creating bespoke trips for wealthier clientele.

Perry brought up the Lindy Effect and challenged us to think about what will stay the same based on how long it's already been around. For instance, Google Ads might change, but people will always need to find local services, and local services will always need to find more clients. Those needs are thousands of years old, so they will likely last thousands more.
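The Lindy heuristic above can be sketched in a few lines (my own toy illustration, not Perry Marshall's framing): for non-perishable things like ideas and human needs, expected remaining lifespan scales with current age.

```python
def lindy_remaining_years(age_years: float) -> float:
    """Simplest Lindy heuristic: expect roughly as much future as past.

    Applies to non-perishable things (ideas, needs, institutions),
    not to people or physical goods.
    """
    return age_years

# "Find local services" is a need thousands of years old;
# Google Ads as a tool is roughly 25 years old.
need_to_find_services = lindy_remaining_years(2000)  # long expected future
google_ads_as_a_tool = lindy_remaining_years(25)     # much shorter bet
assert need_to_find_services > google_ads_as_a_tool
```

The point of the toy model: bet your business on the old, durable need rather than on any particular tool serving it.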

It might be hard to predict the future of technology, but predicting human behaviour should still be relatively easy. Most people want an easy life. Most people will gnash their teeth and complain about their lot instead of doing something about it.
 

AceVentures

Platinum Contributor
FASTLANE INSIDER
Read Unscripted!
Summit Attendee
Speedway Pass
User Power
Value/Post Ratio
406%
Apr 16, 2019
860
3,491
"New tools for man.

Cool.

Man do things easier now.

Good.

Make man life better.

Man use new tool."

ChatGPT help man sound good on internet.

To improve the clarity and impact of the message while maintaining its simplicity and essence, here's a revised version:

"Introducing new tools for humanity.

Innovative.

These tools simplify tasks.

Efficient.

Enhancing the quality of life.

Embrace the advancement."

This revision maintains the original's brevity and positive tone, but it's structured to be more engaging and clear for an online audience.
 

Kevin88660

Platinum Contributor
FASTLANE INSIDER
Read Unscripted!
Speedway Pass
User Power
Value/Post Ratio
117%
Feb 8, 2019
3,629
4,254
Southeast Asia
Musk has been sounding the alarm on AI risks for a decade, and the powers that be are too dense to take any action. No one here at TFLF is going to touch that influence.

Unfortunately, I think the cat's out of the bag and we're all just along for the ride, wherever it takes us.

For better or worse, the dams of restraint burst when the OpenAI board caved and put Altman back in the hot seat. Human greed will continue to overpower caution.

In the meantime I'll just use the tools as they come out to improve the net income of my business and watch in amazement at the changes coming down the pipeline.
Musk has been doing a lot of psyops too.

He was pushing OpenAI to accelerate its progress before 2018, while he was still part of it. Then he quit after a power struggle and began warning about the risks of AI, buying his own competing project time to catch up with OpenAI.
 

MakeItHappen

Gold Contributor
Read Fastlane!
Read Unscripted!
Speedway Pass
User Power
Value/Post Ratio
195%
Apr 12, 2012
647
1,263
Following X accounts that tweet the latest AI news freaks me out, because I project the progress being made at the moment into the future. I have stopped watching AI news, as it makes me feel helpless in a sense; my business could be destroyed by AI in a couple of years. Now I just focus on building my business, as that is what I can control. If AI does eat all the jobs, being an entrepreneur was at least a great personal-development vehicle. ;)
 

Jon822

Silver Contributor
Speedway Pass
User Power
Value/Post Ratio
272%
Nov 21, 2016
345
940
33
Our current iteration of AI is not even close to the AI portrayed in movies. As remarkable as these tools are, you can easily identify weaknesses. For example, ChatGPT would play chess by writing out the moves, and in some cases it would capture its own pieces. The point is that each AI is only really good at one particular task; it doesn't actually "learn" anything like humans do.
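That "capturing its own pieces" failure is exactly the kind of mistake a few lines of ordinary code catch, which is why practical LLM-chess setups pair the model with a rules engine that vetoes illegal moves. A minimal sketch (hypothetical board representation, not any real library's API):

```python
# Board maps squares like "e4" to (colour, piece) tuples, e.g. ("w", "N").
# Moves are UCI-style strings like "e2e4" (from-square + to-square).

def is_own_capture(board: dict, move: str, side: str) -> bool:
    """True if the move would land on a square occupied by the mover's own piece,
    i.e. one of the illegal moves an unguarded LLM has been seen to propose."""
    dest = move[2:4]
    piece = board.get(dest)
    return piece is not None and piece[0] == side

board = {"e2": ("w", "P"), "e4": ("w", "N")}
# The model proposes e2e4, but a white knight already sits on e4:
assert is_own_capture(board, "e2e4", "w")       # illegal: own piece on e4
assert not is_own_capture(board, "e2e3", "w")   # e3 is empty, so this check passes
```

A real guard would check full legality (piece movement, checks, castling), but even this one-rule filter illustrates the division of labour: the language model proposes, deterministic code disposes.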

As far as AI in business is concerned, it's going to create more opportunities in some areas and lower the entry barrier (and, therefore, the potential reward) in others.
 

ZackerySprague

Gold Contributor
FASTLANE INSIDER
Read Rat-Race Escape!
Read Fastlane!
Read Unscripted!
Speedway Pass
User Power
Value/Post Ratio
119%
Jun 26, 2021
1,258
1,495
Fort Worth, Texas
Adapt or get left behind. It's coming either way.

Doesn't matter what our views are on it. What will happen, will happen.

We can use it for good or for evil. It can replace jobs. Mine is being replaced in two years' time, when 75% of issues will be resolved within a 10-minute SLA.
 

Kevin88660

Platinum Contributor
FASTLANE INSIDER
Read Unscripted!
Speedway Pass
User Power
Value/Post Ratio
117%
Feb 8, 2019
3,629
4,254
Southeast Asia
Our current iteration of AI is not even close to the AI portrayed in movies. As remarkable as these tools are, you can easily identify weaknesses. For example, ChatGPT would play chess by writing out the moves, and in some cases it would capture its own pieces. The point is that each AI is only really good at one particular task; it doesn't actually "learn" anything like humans do.

As far as AI in business is concerned, it's going to create more opportunities in some areas and lower the entry barrier (and, therefore, the potential reward) in others.
Yeah, I am shocked that it cannot even play chess when Deep Blue could beat the best human player back in 1997.
 

amp0193

Legendary Contributor
FASTLANE INSIDER
EPIC CONTRIBUTOR
Read Fastlane!
Read Unscripted!
Summit Attendee
Speedway Pass
User Power
Value/Post Ratio
442%
May 27, 2013
3,735
16,517
United States
Yeah, I am shocked that it cannot even play chess when Deep Blue could beat the best human player back in 1997.
Not that surprising that a language engine that has existed for a little over a year is not optimized for playing chess.


It's also not hard to imagine when it will be capable of that, as chess games are heavily notated, written out, and analyzed (although I can't imagine these were heavily prioritized in the base model).

Looks like there are quite a few custom GPTs built and optimized for chess. I'd be curious to see how they do.
 

Panos Daras

Silver Contributor
FASTLANE INSIDER
Read Rat-Race Escape!
Read Fastlane!
Read Unscripted!
Speedway Pass
User Power
Value/Post Ratio
147%
Oct 10, 2022
432
633
Our current iteration of AI is not even close to the AI portrayed in movies. As remarkable as these tools are, you can easily identify weaknesses. For example, ChatGPT would play chess by writing out the moves, and in some cases it would capture its own pieces. The point is that each AI is only really good at one particular task; it doesn't actually "learn" anything like humans do.

As far as AI in business is concerned, it's going to create more opportunities in some areas and lower the entry barrier (and, therefore, the potential reward) in others.
Very good observation, 100% true.

Large Language Models, such as ChatGPT, are typically trained to handle numerous general-purpose, cognitively challenging tasks on thousands of state-of-the-art specialized processors.

However, there are significant shortcomings when using general LLMs in specialized domains.

Here is an example:
Nokia Language Model and Generative AI
 

Andy Black

Help people. Get paid. Help more people.
Staff member
FASTLANE INSIDER
EPIC CONTRIBUTOR
Read Fastlane!
Speedway Pass
User Power
Value/Post Ratio
369%
May 20, 2014
18,727
69,160
Ireland
The average adult fails at the marshmallow experiment, and one look at their waistlines is an utter reflection of that failure.
That's a Tweetable.
 
