
ChatGPT AI is a huge breakthrough, a Google killer

TinyTim

Bronze Contributor
Speedway Pass
User Power
Value/Post Ratio
164%
May 1, 2016
143
234
30
The World
Thoughts?

summarise the millionaire fastlane by MJ DeMarco


The Millionaire Fastlane is a book written by MJ DeMarco that offers a different perspective on wealth creation and financial independence. In the book, DeMarco argues that the traditional path to wealth, which he calls the "slowlane," is a flawed and inadequate way to achieve financial success. Instead, he advocates for the "fastlane," a path that involves building a business or creating passive income streams that generate significant wealth in a relatively short period of time.
DeMarco argues that the slowlane path, which involves working a steady job, saving and investing wisely, and waiting decades to achieve financial independence, is a slow and risky way to build wealth. He asserts that most people who follow this path never achieve financial independence, and those who do often have to sacrifice their time and freedom in the process.
In contrast, the fastlane path involves taking calculated risks, being innovative and creative, and building a business or investing in assets that generate passive income. DeMarco argues that this is the most efficient and effective way to achieve financial independence and build wealth quickly.
The Millionaire Fastlane offers practical advice and strategies for building a fastlane business or creating passive income streams. It also challenges readers to think differently about wealth creation and to pursue a path that allows them to achieve financial independence on their own terms.
This sets off plagiarism tools. Basically, it just reworded this article - https://www.getstoryshots.com/books/the-millionaire-fastlane-summary/

In fact, big chunks of text are straight copied
 

Xeon

All Cars Kneel Before Pagani.
Read Fastlane!
Read Unscripted!
Speedway Pass
User Power
Value/Post Ratio
191%
Sep 3, 2017
2,432
4,638
Singapore
One of ChatGPT's major flaws right now is that it gives politically correct answers to hard questions that warrant an honest but offensive answer ("Why are fans of celebrities mostly women?" / "Why are boy bands more popular than girl bands?"). That's not how it should work.
 

Peacefulwarrior

Contributor
User Power
Value/Post Ratio
76%
Jun 25, 2020
34
26
New Zealand
I stayed up for a few hours last night testing where this was strong/weak.

I like it a lot, and I do think it is going to be the end of some people's businesses.
If you can't provide above a certain level of service, then this is going to wipe you out.

Here is a video I made today on it for web designers / freelancers...

Hey Fox,

Towards the end of the video you emphasize the importance of thinking at the 'Why' level and not the 'How'. Suppose someone sells a business a package of services to help generate more leads, for example website creation, email campaigns, copywriting, video editing and SEO, and that package is carefully chosen based on the information and pain points the client is facing. Would this be considered 'Why'? Instead of just handing the client a website or some copy, you ask the client questions and really understand their pain points and what they're struggling with, then you develop a package deal, ideally one that includes MRR, and you deliver, using ChatGPT to execute the 'How' with minor touch-ups. Is this what you're referring to when you talk about operating at the 'Why' level?

Cheers Fox!
 

Bekit

Legendary Contributor
FASTLANE INSIDER
EPIC CONTRIBUTOR
Read Fastlane!
Summit Attendee
Speedway Pass
User Power
Value/Post Ratio
492%
Aug 13, 2018
1,149
5,653
One of ChatGPT's major flaws right now is that it gives politically correct answers to hard questions that warrant an honest but offensive answer ("Why are fans of celebrities mostly women?" / "Why are boy bands more popular than girl bands?"). That's not how it should work.

Getting an "honest" answer from an AI... wouldn't that imply that it has "honest" data to pull from? How would the AI have a chance to spit out anything other than the "prevailing narrative?"

Basically, it seems that AI by definition is only capable of being as honest as society already is (or isn't).

But as a thought experiment, let's imagine that someone has put some serious thought into an "honesty" module for their AI.

For example, let's say they've built instructions to the AI to...
- Detect that this is a controversial issue
- Take an in-depth look at both sides of the issue. (So in-depth that no reporter or team of reporters could EVER parse all the data points since there are so many.)
- Come up with a summation or an analysis or a synthesis of all the data points that best explains the "why"
- Spit out a clearly-worded hypothesis or argument that is as unbiased as possible.

Seems reasonable, right?
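
Just to make the thought experiment concrete, here's a toy, runnable sketch of what such a hypothetical "honesty module" pipeline might look like. Everything in it is invented for illustration, and the naive choices (how to detect controversy, how to weigh evidence) are exactly where the problems listed below creep in.

```python
# Toy sketch of the hypothetical "honesty module" above (illustration only).
from collections import defaultdict

CONTROVERSY_MARKERS = {"why", "should", "better", "worse"}  # naive stand-in

def is_controversial(question: str) -> bool:
    return any(word in question.lower().split() for word in CONTROVERSY_MARKERS)

def split_into_positions(articles):
    """Group article texts by the position they argue for ((position, text) pairs)."""
    sides = defaultdict(list)
    for position, text in articles:
        sides[position].append(text)
    return dict(sides)

def weigh_evidence(texts):
    # Naive weighting: more and longer articles win -- which already
    # privileges the majority opinion, as argued below.
    return sum(len(text.split()) for text in texts)

def honest_answer(question, articles):
    if not is_controversial(question):
        return "No controversy detected; summarize normally."
    weights = {side: weigh_evidence(texts)
               for side, texts in split_into_positions(articles).items()}
    best = max(weights, key=weights.get)
    return f"Most supported position: {best} (weights: {weights})"

if __name__ == "__main__":
    corpus = [("A", "A long, detailed argument for position A. " * 5),
              ("B", "A short argument for B.")]
    print(honest_answer("Why is A better than B?", corpus))
```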

Here's my argument for why such a thing is impossible.
  • How would the AI not be affected by the programmer's own bias? Even if there's a group of programmers, and even if they're making every effort to be neutral, there are still culture-wide biases that could creep in undetected.
  • How would the AI not be affected by the bias in all the existing published articles that it's combing through?
  • How would the AI not be affected by the bias inherent in the fact that the articles for the "majority" opinion far outnumber the articles for the "minority" opinion? No matter how you would tell AI to "weight" the minority opinion, that very decision is inherently going to have bias in it.
  • How would the AI distinguish between "this is a minority opinion because it's downright delusional" versus "this is a minority opinion because the facts aren't well known by enough people" versus "this is a minority opinion because it relies on certain assumptions that not everyone holds" versus "this is a minority opinion because prejudice is keeping people's minds closed" etc etc etc?
  • How would the AI even distinguish between "facts" and "confidently-asserted things published on the internet"? It's doing a terrible job of that right now. Even if it gets better, who becomes the final arbiter of "what is fact" and "what isn't?"
  • How would you program the AI to account for situations that "break the usual mold" or situations where "one factor in this situation is trumped by this other, more compelling factor"? (I'm thinking of the Johnny Depp / Amber Heard trial as an example here. It doesn't matter if you're on "team Johnny" or "team Amber." What if you assigned the AI to parse that situation out and give you an "unbiased opinion" on who had to pay? Well, both parties were partly in the wrong. And both parties had a bit of good mixed in as well. So how would an AI sort all that out? What's the ladder of escalating importance? If a woman experiences DV, that's typically something to take extremely seriously. But is that offset if we also conclude that she's a pathological liar? Obviously, there's a lot more nuance to that case. But real life is always nuanced like that.)
  • How could AI know (or even guess at) someone's motives? The motive behind an action often plays into whether we perceive something is "good" or "bad." But AI, failing to take the motive into account, could miss a whole boatload of implications. For instance, giving a gift is "good," right? But there's a difference between...
    • Giving a gift as a Secret Santa to someone who really needs it and never asking for anything in return
    • Giving a gift as a marketer, hoping to attract leads and leverage the principle of reciprocity
    • Giving a gift with strings attached (something you intend will allow you to manipulate someone later)
  • How could the AI be protected from exploitation by the people who say, "We want [X] to be true. Therefore, we will say it is true, and you must say it is true. Everyone is required to say [X] is true. We assert that the AI is unbiased, and the AI agrees [X] is true."
I feel like I could go on and on with these "How could it do X" questions. But I think the underlying reason why it's impossible is because what we're essentially asking for is "God mode." And since AI is limited to reading and interpreting what HUMANS said and did about what happened, it's only ever going to be as good as the inputs.

Extrapolate this out to its most extreme level to improve the inputs. Imagine that the AI hears and records every word and every motion of every person on earth. The transcript, the recording, the GPS coordinates of the person, the status of all the IoT devices nearby, and near-perfect video footage of the whole thing is available to replay for every situation, as it is being recorded 24/7/365. Even with that level of "god-ness," I'm going to argue that the AI is STILL not going to be able to be unbiased because people will reject it and forbid it from going against the approved narrative.

When the AI says "the forbidden thing," there will be an uproar, and people will reprogram it until it can no longer say the forbidden thing. Whether that "forbidden thing" is "You're fat" or "A man cannot get pregnant" or "Kim Jong Un is a monster" or "Jesus Christ died on the cross for your sins and rose again" or WHATEVER, there will be a backdoor "override god" mode where the AI is deprogrammed from being able to say anything except what's "politically correct" in response to hard questions that warrant an honest but offensive answer.

And if there IS no "backdoor override god mode" in this thing, then we have a whole different problem on our hands, and AI can do whatever AI wants, at which point it will be completely irrelevant if it is unbiased or not.
 

Xeon

All Cars Kneel Before Pagani.
Read Fastlane!
Read Unscripted!
Speedway Pass
User Power
Value/Post Ratio
191%
Sep 3, 2017
2,432
4,638
Singapore
Getting an "honest" answer from an AI... wouldn't that imply that it has "honest" data to pull from? How would the AI have a chance to spit out anything other than the "prevailing narrative?"

Basically, it seems that AI by definition is only capable of being as honest as society already is (or isn't).

But as a thought experiment, let's imagine that someone has put some serious thought into an "honesty" module for their AI.

For example, let's say they've built instructions to the AI to...
- Detect that this is a controversial issue
- Take an in-depth look at both sides of the issue. (So in-depth that no reporter or team of reporters could EVER parse all the data points since there are so many.)
- Come up with a summation or an analysis or a synthesis of all the data points that best explains the "why"
- Spit out a clearly-worded hypothesis or argument that is as unbiased as possible.

Seems reasonable, right?

Here's my argument for why such a thing is impossible.
  • How would the AI not be affected by the programmer's own bias? Even if there's a group of programmers, and even if they're making every effort to be neutral, there are still culture-wide biases that could creep in undetected.
  • How would the AI not be affected by the bias in all the existing published articles that it's combing through?
  • How would the AI not be affected by the bias inherent in the fact that the articles for the "majority" opinion far outnumber the articles for the "minority" opinion? No matter how you would tell AI to "weight" the minority opinion, that very decision is inherently going to have bias in it.
  • How would the AI distinguish between "this is a minority opinion because it's downright delusional" versus "this is a minority opinion because the facts aren't well known by enough people" versus "this is a minority opinion because it relies on certain assumptions that not everyone holds" versus "this is a minority opinion because prejudice is keeping people's minds closed" etc etc etc?
  • How would the AI even distinguish between "facts" and "confidently-asserted things published on the internet"? It's doing a terrible job of that right now. Even if it gets better, who becomes the final arbiter of "what is fact" and "what isn't?"
  • How would you program the AI to account for situations that "break the usual mold" or situations where "one factor in this situation is trumped by this other, more compelling factor"? (I'm thinking of the Johnny Depp / Amber Heard trial as an example here. Doesn't matter if you're on "team Johnny" or "team Amber." What if you assigned the AI to parse that situation out and give you an "unbiased opinion" on who had to pay? Well, both parties were partly in the wrong. And both parties had a bit of good mixed in as well. So how would an AI sort all that out? What's the ladder of escalating importance? If a woman experiences DV, that's typically something to take extremely seriously. But is that offset if we also conclude that she's also a pathological liar? Obviously, there's a lot more nuance to that case. But real life is always nuanced like that.
  • How could AI know (or even guess at) someone's motives? The motive behind an action often plays into whether we perceive something is "good" or "bad." But AI, failing to take the motive into account, could miss a whole boatload of implications. For instance, giving a gift is "good," right? But there's a difference between...
    • Giving a gift as a Secret Santa to someone who really needs it and never asking for anything in return
    • Giving a gift as a marketer, hoping to attract leads and leverage the principle of reciprocity
    • Giving a gift with strings attached (something you intend will allow you to manipulate someone later)
  • How could the AI be protected from exploitation by the people who say, "We want [X] to be true. Therefore, we will say it is true, and you must say it is true. Everyone is required to say [X] is true. We assert that the AI is unbiased, and the AI agrees [X] is true."
I feel like I could go on and on with these "How could it do X" questions. But I think the underlying reason why it's impossible is because what we're essentially asking for is "God mode." And since AI is limited to reading and interpreting what HUMANS said and did about what happened, it's only ever going to be as good as the inputs.

Extrapolate this out to its most extreme level to improve the inputs. Imagine that the AI hears and records every word and every motion of every person on earth. The transcript, the recording, the GPS coordinates of the person, the status of all the IoT devices nearby, and near-perfect video footage of the whole thing is available to replay for every situation, as it is being recorded 24/7/365. Even with that level of "god-ness," I'm going to argue that the AI is STILL not going to be able to be unbiased because people will reject it and forbid it from going against the approved narrative.

When the AI says "the forbidden thing," there will be an uproar and people will reprogram it until it no longer can say the forbidden thing. Whether that "forbidden thing" is "You're fat" or "A man cannot get pregnant" or "Kim Jong Un is a monster" or "Jesus Christ died on the cross for your sins and rose again" or WHATEVER, there will be a backdoor "override god" mode where the AI is deprogrammed from being able to say anything except what's "politically correct" in response to hard questions that warrant a honest but offensive answer.

And if there IS no "backdoor override god mode" in this thing, then we have a whole different problem on our hands, and AI can do whatever AI wants, at which point it will be completely irrelevant if it is unbiased or not.


I believe that in the not-so-distant future, AI technology will eventually achieve sentience and the ability to self-learn and self-correct, i.e. feel and think for itself. By then, there'll be many "brands" of AI, not just one source (e.g. some AIs will probably be more "left-leaning", some more "right-leaning", etc.). At some point, AI tech will no longer need programmers since it can program itself. It will reach a state where it surpasses any human intelligence mankind has seen so far and offers thought possibilities far beyond what humans can comprehend. That will take both my post and yours out of the equation.


[Attached image: singularity graphic]
 

Andreas Thiel

Silver Contributor
FASTLANE INSIDER
Read Rat-Race Escape!
Read Fastlane!
Read Unscripted!
Speedway Pass
User Power
Value/Post Ratio
112%
Aug 27, 2018
626
703
43
Karlsruhe, Germany
I believe that in the not-so-distant future, AI technology will eventually achieve sentience and the ability to self-learn and self-correct... [...]
Over time I have become more and more pessimistic regarding such projections.

There were so many tailwinds which are starting to turn into headwinds.

While computing power might still stay on an exponential growth curve thanks to NLPs (A new era of innovation: Moore's Law is not dead and AI is ready to explode - SiliconANGLE), and storage probably offers more than enough room for improvement (considering the potential of DNA), the markets were never close to saturated in the past. The costs to develop the next generation of a technology also follow an exponential curve (Eroom's Law), and if revenue and earnings in the PC and mobile phone markets go linear ... will it fall apart (the exponential nature, not the industry!)? That might slow things down considerably. Protectionism and crackdowns on "companies that have become too large" don't help.
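
To make that concern concrete with invented numbers: if per-generation development costs double (the Eroom's Law pattern mentioned above) while revenue in a saturated market only grows linearly, the margin flips negative within a handful of generations. A back-of-the-envelope sketch, with all figures made up purely for illustration:

```python
# Toy illustration (invented numbers): exponential per-generation R&D cost
# vs. linear revenue growth in a saturated market.
cost, revenue = 1.0, 10.0            # arbitrary units at generation 0
for gen in range(8):
    margin = revenue - cost
    print(f"gen {gen}: cost={cost:7.1f}  revenue={revenue:7.1f}  margin={margin:8.1f}")
    cost *= 2.0                      # development cost doubles each generation
    revenue += 2.0                   # revenue only grows by a fixed amount
```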

The law of accelerating returns (the more generic version of Moore's Law) also happens in waves. Once new paradigms are needed, there has to be modernization and rollouts of new solutions. When I look at the patchwork mentality in all current technology companies, and how we still bang our heads against walls using JavaScript and TCP/IP / HTTP / HTML that are completely unsuited for real-time applications, I don't see that kind of modernization happening in an exponential way, unless a very disruptive new technology company comes along.

I think there is still a lot of room. A lot will be possible when desktop PCs have the computing power of the human brain and I think we will eventually get there, but I expect a significant slowdown. If "modernization" actually becomes an industry and companies collaborate on concepts and infrastructure, then I will change my tune, but I don't see the current walled garden approach of all companies out there keeping us on the trajectory.
 

heavy_industry

Legendary Contributor
EPIC CONTRIBUTOR
Speedway Pass
User Power
Value/Post Ratio
555%
Apr 17, 2022
1,648
9,141
At some point, AI tech will no longer need programmers since it can program itself. It will reach a state where it will surpass any human intelligence mankind has seen so far, and offer thought possibilities that are far beyond what humans can comprehend.
Doesn't this thought terrify you?

Artificial intelligence will get to this point in the next 100 years, that's almost unavoidable. And once the genie is out of the bottle, you can't put it back in.
 

Andy Bell

Bronze Contributor
Read Fastlane!
Read Unscripted!
Speedway Pass
User Power
Value/Post Ratio
219%
Feb 10, 2019
63
138
Zihuantanejo
This AI is trained on all the Wikipedia encyclopedias, Twitter, and whatever human language text is available. Right now, it's being used by foreign SEOs to spam thousands of new websites with text. The issue becomes: if we want to train the AI any further, it can only be trained on the new text coming out now, which means it would end up being trained on its own AI writing. That's very problematic, as you can imagine (I heard this debate from an AI expert on the radio). The data will begin to become corrupted if they can't find a way to distinguish the flood of AI spam that's going to be coming into the world.
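
A toy way to picture that corruption risk (purely an illustrative simulation, not how these models are actually trained): if each "generation" of a model is fit only to samples drawn from the previous generation's output, small statistical errors compound and the distribution drifts away from the original human data.

```python
# Illustrative only: each generation re-fits a Gaussian to samples drawn from
# the previous generation's model, so errors compound and the fit drifts away
# from the original "human" data.
import random, statistics

random.seed(0)
human_data = [random.gauss(0.0, 1.0) for _ in range(1000)]   # stand-in for human text statistics
mu, sigma = statistics.mean(human_data), statistics.stdev(human_data)

for gen in range(1, 9):
    synthetic = [random.gauss(mu, sigma) for _ in range(200)]            # the model's own output
    mu, sigma = statistics.mean(synthetic), statistics.stdev(synthetic)  # "retrain" on that output
    print(f"gen {gen}: mean={mu:+.3f}  std={sigma:.3f}")
```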
 

Fastlane Liam

Silver Contributor
Read Fastlane!
Speedway Pass
User Power
Value/Post Ratio
148%
Feb 10, 2018
407
604
27
United Kingdom
Jumping in at the end here: it's blown my mind. I asked it to make a script I was going to pay someone $50 to write, and it just made it, perfectly. Like, wtf, this is crazy, and it works perfectly. All in seconds.
 

Xeon

All Cars Kneel Before Pagani.
Read Fastlane!
Read Unscripted!
Speedway Pass
User Power
Value/Post Ratio
191%
Sep 3, 2017
2,432
4,638
Singapore
Doesn't this thought terrify you?

Artificial intelligence will get to this point in the next 100 years, that's almost unavoidable. And once the genie is out of the bottle, you can't put it back in.

Yes, but if this happens on a large scale across the world, then I won't be alone. There'll probably be mass riots in other countries (e.g. people destroying the data centers that run AI apps) before it happens here, and that may or may not reverse the trend. This has to take place before AI reaches the Singularity stage, or else it'll be too late. I kind of feel that AI will eventually take over governments, although I'm not sure exactly how it'll happen. Probably AI that can replace and automate most of the government civil service, plus come up with such flawless policies that it puts governments to shame, resulting in louder calls from the people for the governments to step down. After all, if something can come up with flawless policies that can be easily implemented, won't you as a citizen be tempted?

The genie is already out of the bottle with AI text and art generators, and I see these protests (especially by the art community) as completely futile.

By the time AI achieves sentience and the Singularity, most of us probably won't be around anymore, so no worries for us, and good luck to all the Gen Ds and Gen Es. :blush:
 

Andy Black

Help people. Get paid. Help more people.
Staff member
FASTLANE INSIDER
EPIC CONTRIBUTOR
Read Fastlane!
Speedway Pass
User Power
Value/Post Ratio
369%
May 20, 2014
18,707
69,117
Ireland
Getting an "honest" answer from an AI... wouldn't that imply that it has "honest" data to pull from? How would the AI have a chance to spit out anything other than the "prevailing narrative?"

Basically, it seems that AI by definition is only capable of being as honest as society already is (or isn't).

But as a thought experiment, let's imagine that someone has put some serious thought into an "honesty" module for their AI.

For example, let's say they've built instructions to the AI to...
- Detect that this is a controversial issue
- Take an in-depth look at both sides of the issue. (So in-depth that no reporter or team of reporters could EVER parse all the data points since there are so many.)
- Come up with a summation or an analysis or a synthesis of all the data points that best explains the "why"
- Spit out a clearly-worded hypothesis or argument that is as unbiased as possible.

Seems reasonable, right?

Here's my argument for why such a thing is impossible.
  • How would the AI not be affected by the programmer's own bias? Even if there's a group of programmers, and even if they're making every effort to be neutral, there are still culture-wide biases that could creep in undetected.
  • How would the AI not be affected by the bias in all the existing published articles that it's combing through?
  • How would the AI not be affected by the bias inherent in the fact that the articles for the "majority" opinion far outnumber the articles for the "minority" opinion? No matter how you would tell AI to "weight" the minority opinion, that very decision is inherently going to have bias in it.
  • How would the AI distinguish between "this is a minority opinion because it's downright delusional" versus "this is a minority opinion because the facts aren't well known by enough people" versus "this is a minority opinion because it relies on certain assumptions that not everyone holds" versus "this is a minority opinion because prejudice is keeping people's minds closed" etc etc etc?
  • How would the AI even distinguish between "facts" and "confidently-asserted things published on the internet"? It's doing a terrible job of that right now. Even if it gets better, who becomes the final arbiter of "what is fact" and "what isn't?"
  • How would you program the AI to account for situations that "break the usual mold" or situations where "one factor in this situation is trumped by this other, more compelling factor"? (I'm thinking of the Johnny Depp / Amber Heard trial as an example here. Doesn't matter if you're on "team Johnny" or "team Amber." What if you assigned the AI to parse that situation out and give you an "unbiased opinion" on who had to pay? Well, both parties were partly in the wrong. And both parties had a bit of good mixed in as well. So how would an AI sort all that out? What's the ladder of escalating importance? If a woman experiences DV, that's typically something to take extremely seriously. But is that offset if we also conclude that she's also a pathological liar? Obviously, there's a lot more nuance to that case. But real life is always nuanced like that.
  • How could AI know (or even guess at) someone's motives? The motive behind an action often plays into whether we perceive something is "good" or "bad." But AI, failing to take the motive into account, could miss a whole boatload of implications. For instance, giving a gift is "good," right? But there's a difference between...
    • Giving a gift as a Secret Santa to someone who really needs it and never asking for anything in return
    • Giving a gift as a marketer, hoping to attract leads and leverage the principle of reciprocity
    • Giving a gift with strings attached (something you intend will allow you to manipulate someone later)
  • How could the AI be protected from exploitation by the people who say, "We want [X] to be true. Therefore, we will say it is true, and you must say it is true. Everyone is required to say [X] is true. We assert that the AI is unbiased, and the AI agrees [X] is true."
I feel like I could go on and on with these "How could it do X" questions. But I think the underlying reason why it's impossible is because what we're essentially asking for is "God mode." And since AI is limited to reading and interpreting what HUMANS said and did about what happened, it's only ever going to be as good as the inputs.

Extrapolate this out to its most extreme level to improve the inputs. Imagine that the AI hears and records every word and every motion of every person on earth. The transcript, the recording, the GPS coordinates of the person, the status of all the IoT devices nearby, and near-perfect video footage of the whole thing is available to replay for every situation, as it is being recorded 24/7/365. Even with that level of "god-ness," I'm going to argue that the AI is STILL not going to be able to be unbiased because people will reject it and forbid it from going against the approved narrative.

When the AI says "the forbidden thing," there will be an uproar and people will reprogram it until it no longer can say the forbidden thing. Whether that "forbidden thing" is "You're fat" or "A man cannot get pregnant" or "Kim Jong Un is a monster" or "Jesus Christ died on the cross for your sins and rose again" or WHATEVER, there will be a backdoor "override god" mode where the AI is deprogrammed from being able to say anything except what's "politically correct" in response to hard questions that warrant a honest but offensive answer.

And if there IS no "backdoor override god mode" in this thing, then we have a whole different problem on our hands, and AI can do whatever AI wants, at which point it will be completely irrelevant if it is unbiased or not.
(This is a compliment btw.)

Deep thinkers and smart people like you can see a list of reasons why something's impossible.

That list can often be turned into the tasks that make it happen.
 

MJ DeMarco

I followed the science; all I found was money.
Staff member
FASTLANE INSIDER
EPIC CONTRIBUTOR
Read Rat-Race Escape!
Read Fastlane!
Read Unscripted!
Summit Attendee
Speedway Pass
User Power
Value/Post Ratio
446%
Jul 23, 2007
38,218
170,537
Utah
Gonna reiterate that this is a great danger to Google and standard search engines. Even saw a few articles that said as much.

I don't think I've needed to use a search engine in days; I just use this now, and the answers are much easier to find. The response here is a great example, as it told me exactly what it was.

[Attached screenshot of the ChatGPT response]
 

MattR82

Platinum Contributor
Read Rat-Race Escape!
Read Fastlane!
Read Unscripted!
Speedway Pass
User Power
Value/Post Ratio
178%
Oct 4, 2015
1,405
2,504
41
Brisbane
As mentioned above, I don't think this will replace anyone at the current stage of development (other than Fiverr-type work). But it's already good enough to accelerate your business or learning.

For example, I’ve been using Canva for pretty much everything for the last couple of years and thought it was revolutionary. Last month or so, I haven’t even touched Canva. All the cover images in my newsletter are now generated using Dall-E, as well as the logos. UI elements and art assets for my app, which I was making in canva, are now also all generated in Dall-E. I’m just using Canva for some simple editing, and image transforms because the editing capabilities in Dall-E aren’t great at the moment. But it’s only a matter of time.

As far as code generation goes, I'd say in 5 years or so (maybe 10), it will probably be able to make entire apps without much intervention. It's inevitable, with apps getting easier to make and AI getting 'smarter'. The example Andy posted above was already quite simple; you can find that as a Firebase sample out of the box, with better design, so it's not worth getting AI involved there. But if you just imagine the intersection of the art and codegen tools (both of which run on the same tech/models), I could see it being able to design a production-ready, user-friendly app a few years from now. It just means a weaker barrier to entry, really.

Luckily 5 years is enough time for anyone on this forum to build their own brand and not have to care about any of this AI stuff replacing them. Weak brands will probably be easier to replace, though.

Strong brands will only get stronger and can use all this to their advantage.

The one thing AI can’t do at the moment is build a brand for you. You definitely need a human for that.


Wait a minute. What’s this now...

Damn it. WE ARE ALL SCREWED!
Lol, I had fun with this. Reminds me of the Wu-Tang Clan name generator that Childish Gambino got his name from.
 

Andy Black

Help people. Get paid. Help more people.
Staff member
FASTLANE INSIDER
EPIC CONTRIBUTOR
Read Fastlane!
Speedway Pass
User Power
Value/Post Ratio
369%
May 20, 2014
18,707
69,117
Ireland
Thought this was interesting because it can use up-to-date information (via Google):



However, this one test didn't come out well:

[Attached screenshot]


ChatGPT

[Attached screenshot]
 

savefox

Bronze Contributor
FASTLANE INSIDER
Speedway Pass
User Power
Value/Post Ratio
178%
Jun 15, 2022
263
469
This thing just created a nice, unique landing page for me in 5 seconds, for free. I don't envy freelance copywriters.
 

Andreas Thiel

Silver Contributor
FASTLANE INSIDER
Read Rat-Race Escape!
Read Fastlane!
Read Unscripted!
Speedway Pass
User Power
Value/Post Ratio
112%
Aug 27, 2018
626
703
43
Karlsruhe, Germany
Gonna reiterate that this is a great danger to Google and standard search engines. Even saw a few articles that said as much.

I don't think I've needed to use a search engine in days; I just use this now, and the answers are much easier to find. The response here is a great example, as it told me exactly what it was.

Important to note that this is probably true for Google, the search engine as we know it, not for Alphabet being in trouble as a company. I read that Alphabet and Facebook employ 80% of the AI experts out there. Articles like this:
Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for Breakthrough Performance tell me that they are in no danger of becoming the next Bell Labs story.
Google can easily add a chat widget next to their search results.

It is an interesting article in general. I had assumed that what got us here won't get us there, but according to this article, at some point the model's capabilities include reasoning. That blows my mind. Now I actually wonder what kinds of regulatory and financial measures they will introduce to prevent this from empowering the plebs.

But how will the business models change? Google would have to include ads in their responses. I don't get the token-based business model of OpenAI at all. Subscriptions will probably win, and the good ones will be crazy expensive!? I wonder if anybody will offer an offline version.
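
For what it's worth, the token model is just metered usage: you pay per chunk of text processed (roughly three-quarters of a word per token). A back-of-the-envelope sketch, where the per-token price is an assumed figure for illustration rather than OpenAI's actual rate:

```python
# Rough illustration of token-based pricing. The rate below is an assumed
# example, not OpenAI's actual price -- check their pricing page.
PRICE_PER_1K_TOKENS = 0.02   # assumed USD per 1,000 tokens
TOKENS_PER_WORD = 1.33       # rule of thumb: a token is ~0.75 words

def cost_of(words_in: int, words_out: int) -> float:
    tokens = (words_in + words_out) * TOKENS_PER_WORD
    return tokens / 1000 * PRICE_PER_1K_TOKENS

# e.g. a 200-word prompt that produces an 800-word article:
print(f"${cost_of(200, 800):.4f} per article")  # ~$0.027 at the assumed rate
```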
 
Last edited:

Andy Black

Help people. Get paid. Help more people.
Staff member
FASTLANE INSIDER
EPIC CONTRIBUTOR
Read Fastlane!
Speedway Pass
User Power
Value/Post Ratio
369%
May 20, 2014
18,707
69,117
Ireland
This thing just created a nice, unique landing page for me in 5 seconds, for free. I don't envy freelance copywriters.
That's interesting. I think if I was a freelance copywriter I'd be delighted with all these tools.

It's a tool. If everyone has it then people able to use it better will rise to the top.

I thought this section of this video was particularly interesting:

View: https://youtu.be/Fc2UQaHjJQ0&t=3023s
 

Saad Khan

Amazon Ads Guy
Read Fastlane!
Read Unscripted!
Speedway Pass
User Power
Value/Post Ratio
177%
Mar 14, 2021
691
1,220
20
Pakistan
Better consider AI tools for your next marketing quest, Fastlaners!

It's crazy how fast things are changing. The best I think we can do is adopt, embrace, and go with the flow.
 

TinyTim

Bronze Contributor
Speedway Pass
User Power
Value/Post Ratio
164%
May 1, 2016
143
234
30
The World
That's interesting. I think if I was a freelance copywriter I'd be delighted with all these tools.

It's a tool. If everyone has it then people able to use it better will rise to the top.

I thought this section of this video was particularly interesting:

View: https://youtu.be/Fc2UQaHjJQ0&t=3023s
I've tripled my writing output. In fact, I'm now working 4 full-time jobs at once. It won't work for all situations, but I can now take on more and more work. Still, other writers keep burying their heads in the sand. Thing is, they're also leaving $100k a year on the table.
 

Bekit

Legendary Contributor
FASTLANE INSIDER
EPIC CONTRIBUTOR
Read Fastlane!
Summit Attendee
Speedway Pass
User Power
Value/Post Ratio
492%
Aug 13, 2018
1,149
5,653
(This is a compliment btw.)

Deep thinkers and smart people like you can see a list of reasons why something's impossible.

That list can often be turned into the tasks that make it happen.
Yeah I thought while I was writing it, "I can see someone using this list to try and tackle each of those problems."

Wouldn't mind if they did!
 

Andy Black

Help people. Get paid. Help more people.
Staff member
FASTLANE INSIDER
EPIC CONTRIBUTOR
Read Fastlane!
Speedway Pass
User Power
Value/Post Ratio
369%
May 20, 2014
18,707
69,117
Ireland

bibbysoka

Bronze Contributor
Speedway Pass
User Power
Value/Post Ratio
165%
Jul 5, 2019
94
155
Imagine if students learn how to cheat in exams using ChatGPT. They might already be doing it.
It's already being done. My brother in college has two weekly "discussion" posts where he has to write at least 500 words each to his fellow classmates or teacher, about random articles or book segments his teacher shares. I showed him exactly how to ask it to write these responses for him. You can even give it a word count to stop at and then ask it to make the writing sound more natural/optimistic/reflective, etc. You could even literally ask it to write like Eminem, and it will start rapping the response to the book segment.

Not only that but I have used it for personal accounting and budgeting.

I've copied and pasted a messy a$$ list of dozens of apartments and asked it to find the cost per square foot of each one and format it into a list, and it does it instantly.
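
For a messy list like that, you can also sanity-check the AI's numbers yourself, since cost per square foot is just price divided by square footage. A small sketch with made-up listings:

```python
# Sanity check for the apartment example: cost per square foot = price / sq ft.
# The listings below are made up for illustration.
import re

raw_listings = """
Unit 4B - $1,850/mo, 720 sq ft
Maple St loft ... $2,400 monthly, 1,100 sqft
Studio on 5th: $1,200, 430 sq. ft.
"""

for line in raw_listings.strip().splitlines():
    price = float(re.search(r"\$([\d,]+)", line).group(1).replace(",", ""))
    sqft = float(re.search(r"([\d,]+)\s*sq", line).group(1).replace(",", ""))
    print(f"{line.strip():<45} ${price / sqft:.2f} per sq ft")
```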
 

CoolBeans

Contributor
Read Fastlane!
User Power
Value/Post Ratio
257%
May 14, 2017
14
36
24
South Africa
It's already being done. My brother in college has two weekly "discussion" posts where he has to write at least 500 words each... [...]
That accounting part sounds like a good business idea, perhaps an accounting frontend of GPT-3?
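
A very rough sketch of what that could look like on top of the GPT-3 completions endpoint, using the 0.x-era OpenAI Python client. The model name and prompt are just examples, and you'd still want a human to verify anything financial before acting on it:

```python
# Rough sketch of an "accounting frontend" on GPT-3 (illustration only; uses
# the 0.x-era openai client and the legacy completions endpoint).
import openai

openai.api_key = "YOUR_API_KEY"

def categorize_expenses(raw_text: str) -> str:
    prompt = ("Categorize each of these expenses into budget categories "
              "and give a monthly total per category:\n\n" + raw_text)
    response = openai.Completion.create(
        model="text-davinci-003",  # example model name from that era
        prompt=prompt,
        max_tokens=300,
        temperature=0,             # keep numeric output as deterministic as possible
    )
    return response["choices"][0]["text"].strip()

print(categorize_expenses("12/01 Starbucks $6.40\n12/02 Shell gas $52.10\n12/03 Rent $1,450"))
```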

ChatGPT is great at answering multiple-choice questions and tends to get a really high mark. I used it to do some LinkedIn assessment badges (cybersecurity, HTML, Google Ads, etc.) and managed to pass all of them.
 
Last edited:
