Imagine if students learned how to cheat on exams using ChatGPT. They might already be doing it.
This sets off plagiarism tools. Basically, it just reworded this article: https://www.getstoryshots.com/books/the-millionaire-fastlane-summary/

Thoughts?
summarise the millionaire fastlane by MJ DeMarco
The Millionaire Fastlane is a book written by MJ DeMarco that offers a different perspective on wealth creation and financial independence. In the book, DeMarco argues that the traditional path to wealth, which he calls the "slowlane," is a flawed and inadequate way to achieve financial success. Instead, he advocates for the "fastlane," a path that involves building a business or creating passive income streams that generate significant wealth in a relatively short period of time.
DeMarco argues that the slowlane path, which involves working a steady job, saving and investing wisely, and waiting decades to achieve financial independence, is a slow and risky way to build wealth. He asserts that most people who follow this path never achieve financial independence, and those who do often have to sacrifice their time and freedom in the process.
In contrast, the fastlane path involves taking calculated risks, being innovative and creative, and building a business or investing in assets that generate passive income. DeMarco argues that this is the most efficient and effective way to achieve financial independence and build wealth quickly.
The Millionaire Fastlane offers practical advice and strategies for building a fastlane business or creating passive income streams. It also challenges readers to think differently about wealth creation and to pursue a path that allows them to achieve financial independence on their own terms.
Hey Fox, I stayed up for a few hours last night testing where this was strong/weak.
I like it a lot, and I do think it is going to be the end of some people's businesses.
If you can't provide over a certain level of service, then this is going to wipe you out.
Here is a video I made today on it for web designers / freelancers...
One of ChatGPT's major flaws right now is that it gives politically correct answers to hard questions that warrant an honest but offensive answer ("Why are fans of celebrities mostly women?" / "Why are boy bands more popular than girl bands?"). That's not how it should work.
Getting an "honest" answer from an AI... wouldn't that imply that it has "honest" data to pull from? How would the AI have a chance to spit out anything other than the "prevailing narrative?"
Basically, it seems that AI by definition is only capable of being as honest as society already is (or isn't).
But as a thought experiment, let's imagine that someone has put some serious thought into an "honesty" module for their AI.
For example, let's say they've built instructions to the AI to...
- Detect that this is a controversial issue
- Take an in-depth look at both sides of the issue. (So in-depth that no reporter or team of reporters could EVER parse all the data points since there are so many.)
- Come up with a summation or an analysis or a synthesis of all the data points that best explains the "why"
- Spit out a clearly-worded hypothesis or argument that is as unbiased as possible.
Seems reasonable, right?
Here's my argument for why such a thing is impossible.
I feel like I could go on and on with these "How could it do X" questions. But I think the underlying reason it's impossible is that what we're essentially asking for is "God mode." And since AI is limited to reading and interpreting what HUMANS said and did about what happened, it's only ever going to be as good as its inputs.
- How would the AI not be affected by the programmer's own bias? Even if there's a group of programmers, and even if they're making every effort to be neutral, there are still culture-wide biases that could creep in undetected.
- How would the AI not be affected by the bias in all the existing published articles that it's combing through?
- How would the AI not be affected by the bias inherent in the fact that the articles for the "majority" opinion far outnumber the articles for the "minority" opinion? No matter how you would tell AI to "weight" the minority opinion, that very decision is inherently going to have bias in it.
- How would the AI distinguish between "this is a minority opinion because it's downright delusional" versus "this is a minority opinion because the facts aren't well known by enough people" versus "this is a minority opinion because it relies on certain assumptions that not everyone holds" versus "this is a minority opinion because prejudice is keeping people's minds closed" etc etc etc?
- How would the AI even distinguish between "facts" and "confidently-asserted things published on the internet"? It's doing a terrible job of that right now. Even if it gets better, who becomes the final arbiter of "what is fact" and "what isn't?"
- How would you program the AI to account for situations that "break the usual mold" or situations where "one factor in this situation is trumped by this other, more compelling factor"? (I'm thinking of the Johnny Depp / Amber Heard trial as an example here. Doesn't matter if you're on "team Johnny" or "team Amber." What if you assigned the AI to parse that situation out and give you an "unbiased opinion" on who had to pay? Well, both parties were partly in the wrong. And both parties had a bit of good mixed in as well. So how would an AI sort all that out? What's the ladder of escalating importance? If a woman experiences DV, that's typically something to take extremely seriously. But is that offset if we also conclude that she's also a pathological liar? Obviously, there's a lot more nuance to that case. But real life is always nuanced like that.)
- How could AI know (or even guess at) someone's motives? The motive behind an action often plays into whether we perceive something is "good" or "bad." But AI, failing to take the motive into account, could miss a whole boatload of implications. For instance, giving a gift is "good," right? But there's a difference between...
- Giving a gift as a Secret Santa to someone who really needs it and never asking for anything in return
- Giving a gift as a marketer, hoping to attract leads and leverage the principle of reciprocity
- Giving a gift with strings attached (something you intend will allow you to manipulate someone later)
- How could the AI be protected from exploitation by the people who say, "We want [X] to be true. Therefore, we will say it is true, and you must say it is true. Everyone is required to say [X] is true. We assert that the AI is unbiased, and the AI agrees [X] is true."
Extrapolate this out to its most extreme level to improve the inputs. Imagine that the AI hears and records every word and every motion of every person on earth. The transcript, the recording, the GPS coordinates of the person, the status of all the IoT devices nearby, and near-perfect video footage of the whole thing is available to replay for every situation, as it is being recorded 24/7/365. Even with that level of "god-ness," I'm going to argue that the AI is STILL not going to be able to be unbiased because people will reject it and forbid it from going against the approved narrative.
When the AI says "the forbidden thing," there will be an uproar, and people will reprogram it until it no longer can say the forbidden thing. Whether that "forbidden thing" is "You're fat" or "A man cannot get pregnant" or "Kim Jong Un is a monster" or "Jesus Christ died on the cross for your sins and rose again" or WHATEVER, there will be a backdoor "override god" mode where the AI is deprogrammed from being able to say anything except what's "politically correct" in response to hard questions that warrant an honest but offensive answer.
And if there IS no "backdoor override god mode" in this thing, then we have a whole different problem on our hands, and AI can do whatever AI wants, at which point it will be completely irrelevant if it is unbiased or not.
Over time I have become more and more pessimistic regarding such projections.

I believe that in the not-so-far future, AI technology will eventually achieve sentience and the ability to self-learn and self-correct, i.e., feel and think for itself. By then, there'll be many "brands" of AI, not just one source (e.g., some AIs will probably be more "left-leaning," some more "right-leaning," etc.). At some point, AI tech will no longer need programmers since it can program itself. It will reach a state where it will surpass any human intelligence mankind has seen so far, and offer thought possibilities that are far beyond what humans can comprehend. That will take my posts, and yours, out of the equation.
Doesn't this thought terrify you?
Artificial intelligence will get to this point in the next 100 years, that's almost unavoidable. And once the genie is out of the bottle, you can't put it back in.
You'll be hearing from my bot!

My bot vs your bot?
Lol, I had fun with this. Reminds me of the Wu-Tang Clan name generator that Childish Gambino got his name from.

As mentioned above, I don’t think this will replace anyone at the current stage of development (other than Fiverr-type work). But it’s already good enough to accelerate your business or learning.
For example, I’ve been using Canva for pretty much everything for the last couple of years and thought it was revolutionary. For the last month or so, I haven’t even touched Canva. All the cover images in my newsletter are now generated using Dall-E, as well as the logos. UI elements and art assets for my app, which I was making in Canva, are now also all generated in Dall-E. I’m just using Canva for some simple editing and image transforms, because the editing capabilities in Dall-E aren’t great at the moment. But it’s only a matter of time.
As far as code generation goes, I’d say in 5 years or so (maybe 10), it will probably be able to make entire apps without much intervention. It’s inevitable with apps getting easier to make and AI getting ‘smarter’. The example Andy posted above was quite simple already. You can find that as a Firebase sample out of the box, with better design, so not worth getting AI involved with. But if you just imagine the intersection of the Art and Codegen tools (both of which run on the same tech/models), I could see it being able to design a production-ready, user-friendly app in a few years from now. Just means a weaker barrier to entry, really.
Luckily 5 years is enough time for anyone on this forum to build their own brand and not have to care about any of this AI stuff replacing them. Weak brands will probably be easier to replace, though.
Strong brands will only get stronger and can use all this to their advantage.
The one thing AI can’t do at the moment is build a brand for you. You definitely need a human for that.
Business Name Generator - Easily create Brandable Business Names - Namelix

Namelix uses generative AI to create a short, brandable business name. Search for domain availability, and instantly generate a logo for your new business. (namelix.com)
Wait a minute. What’s this now...
Damn it. WE ARE ALL SCREWED!
Important to note that this is probably true for Google, the search engine as we know it, not for Alphabet being in trouble as a company. I read that Alphabet and Facebook employ 80% of the AI experts out there. Articles like this:

Gonna reiterate that this is a great danger to Google and standard search engines. Even saw a few articles that said as much.
I don't think I've needed to use a search engine in days, I just now use this, and the answers are much easier to find. The response here is a great example as it told me exactly what it was.
View attachment 46505
That's interesting. I think if I was a freelance copywriter I'd be delighted with all these tools.

This thing just created me a nice and unique landing page in 5 seconds, and for free. I don't envy freelance copywriters.
I've tripled my writing output. In fact, I'm now working 4 full-time jobs at once. It won't work for all situations, but I can now take on more and more work. Still, other writers keep their heads in the sand. Thing is, they're also leaving $100k a year on the table.

That's interesting. I think if I was a freelance copywriter I'd be delighted with all these tools.
It's a tool. If everyone has it then people able to use it better will rise to the top.
I thought this section of this video was particularly interesting:
View: https://youtu.be/Fc2UQaHjJQ0&t=3023s
Yeah I thought while I was writing it, "I can see someone using this list to try and tackle each of those problems."

(This is a compliment btw.)
Deep thinkers and smart people like you can see a list of reasons why something's impossible.
That list can often be turned into the tasks that make it happen.
It's already being done. My brother in college has two weekly "discussion" posts where he has to write at least 500 words each to his fellow classmates or teacher, just about random articles or book segments his teacher shares. I showed him exactly how to ask it to write these responses for him. You can even give it a word count to stop at and then ask it to make it more natural-sounding/optimistic/reflective, etc. You could even literally ask it to write like Eminem and it will start rapping the response to the book segment.

Imagine if students learned how to cheat on exams using ChatGPT. They might already be doing it.
That accounting part sounds like a good business idea, perhaps an accounting frontend of GPT-3?
Not only that, but I have used it for personal accounting and budgeting.
I've copied and pasted a messy a$$ list of dozens of apartments and asked it to find the cost per square foot of each one and format it into a list, and it does it instantly.
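As an aside, here is what that apartment calculation looks like as plain Python, on a small made-up list (the unit names, rents, and square footages below are invented for illustration). The real value of handing it to ChatGPT is that it parses the messy, unstructured pasted text for you; once the data is structured, the math itself is trivial:

```python
# Hypothetical, already-cleaned apartment data. ChatGPT's advantage is that it
# extracts something like this directly from a messy pasted list with no
# parsing code at all.
apartments = [
    {"name": "Unit A", "rent": 1500, "sqft": 750},
    {"name": "Unit B", "rent": 2100, "sqft": 1200},
    {"name": "Unit C", "rent": 1800, "sqft": 900},
]

def cost_per_sqft(listings):
    """Return (name, dollars per square foot) pairs, cheapest first."""
    rows = [(a["name"], round(a["rent"] / a["sqft"], 2)) for a in listings]
    return sorted(rows, key=lambda row: row[1])

for name, cost in cost_per_sqft(apartments):
    print(f"{name}: ${cost:.2f}/sqft")
```

This prints each unit with its price per square foot, cheapest at the top, which is the same ranked list the poster describes asking ChatGPT to produce.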