The Artificial Intelligence (AI) Random Chat Thread...

Which AI service do you mostly use? (469 voters)

  • ChatGPT: 343 votes (73.1%)
  • Claude: 50 votes (10.7%)
  • Perplexity: 15 votes (3.2%)
  • Gemini: 21 votes (4.5%)
  • Grok X: 27 votes (5.8%)
  • Deepseek: 13 votes (2.8%)

Thought I just had on the "loss of jobs" topic that keeps coming up in here, while reflecting on how the large company that just bought my day job's company is eliminating overlapping roles:

AI can enable more and faster startups, which in theory should spawn more competition and thereby, in some cases, actually create more jobs, versus big companies gobbling everything up and eliminating duplicate roles. It can also enable individuals to start more companies overall than would have existed without AI. Together, maybe those two forces mean this won't be the pure job loss the doomers want to sell us?

(Not to mention, more competition can lower prices, which can make some industries more viable and help them grow as more people can afford their products: see automobiles.)
 
The space is moving at insane speed.

A few months ago, people were talking about which potential billion-dollar industries, like software services, might be disrupted by AI.

Now we have ChatGPT itself facing serious competition and disruption.

Interesting time.

Anyone would be insane not to at least use AI regularly. If I were an education curriculum planner, I would make sure children use these tools for projects and assignments every day.
 
Today I finally went to check out that "all new" Chinese DeepSeek AI.
One of my first questions: whether the Chinese government is spying on me through said AI.
Apparently the search service is busy right now; I shall try again later XD
 
A new AI model from China was just released for public use: "Qwen 2.5 Max" from Alibaba.

Alibaba claims Qwen is better than DeepSeek and ChatGPT, though that isn't even the best part.

Qwen 2.5 Max can generate videos for you (for free, as far as I can see), as opposed to Sora (OpenAI's offering), which is priced at $200.

The video I generated:

(Prompt: a Lamborghini Huracan racing down the Golden Gate Bridge (background having golden sunset))


I think humans will remain irreplaceable as long as the following holds:
value(AI) < value(AI + humans)
In short, in the world of AI, the humans who keep focusing on adding value to the output of AI will stay relevant, and at the end of the day, consumers' and employers' needs will decide the fate of everything.
 
Open-source models like DeepSeek are going to be the ultimate equalizer against corporate AI firms. I can already envision thousands of developers fine-tuning these LLMs, hosting them on their own cloud machines, and running them as a service. Best part: no one can take that fine-tuned model away from you.

I always used to wonder if something like this would be possible because OpenAI was gatekeeping their models and having you build plugins on their platform which THEY control. Now, the devs have another option.

I need to get back into practicing my fine-tuning and transfer learning skills. This is golden!
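For what it's worth, the core trick behind the most popular fine-tuning approach (the LoRA family of parameter-efficient methods) is simple enough to sketch in plain Python. Everything below is a toy illustration with made-up numbers, not real model code; actual fine-tuning would go through a library such as Hugging Face PEFT on real weights:

```python
# Toy illustration of the low-rank adapter (LoRA) idea behind parameter-efficient
# fine-tuning: instead of updating a full d x d weight matrix, you train two small
# matrices B (d x r) and A (r x d) and serve with W_eff = W + B @ A.
# The base weight stays frozen; only the tiny adapter is yours to keep.

def matmul(X, Y):
    """Multiply two matrices given as lists of lists."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

d, r = 4, 1  # hidden size and adapter rank (tiny, for illustration)
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen base weight
B = [[0.1] for _ in range(d)]       # d x r, trainable
A = [[0.0, 1.0, 0.0, 0.0]]          # r x d, trainable

W_eff = add(W, matmul(B, A))        # effective weight used at inference time

full_params = d * d                 # parameters touched by full fine-tuning
lora_params = d * r + r * d         # parameters touched by the adapter
print(f"full fine-tune: {full_params} params, LoRA adapter: {lora_params} params")
```

At realistic sizes (d in the thousands, r around 8-64), the adapter is a tiny fraction of the full matrix, which is why hobbyist hardware can fine-tune open models at all.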
 
AI is only as good as the knowledge of the person using it, imo.

For example, I suspect that a professional copywriter with decades of knowledge and experience could use AI to its fullest extent and deliver absolutely amazing copy, because they know what to ask. A beginner copywriter who is still working through the basics, by contrast, will most likely get only a basic, bare-minimum response from the AI.
 
Also, I have been playing with n8n and Claude integrations to build workflows. The potential of this space is huge, as I am starting to understand that you can build super powerful virtual teams that collaborate and work together.

Imagine DeepSeek, Claude, and ChatGPT in a room together.

Actually, I have an idea:

-------

DeepSeek: "So, how about this weather we're having? I've been analyzing the latest meteorological data, and it seems like there's a 70% chance of rain later this afternoon. Do you think we should plan for an indoor activity?"

Claude: "Ah, yes, the weather! I’ve been keeping an eye on the forecasts too. The atmospheric pressure is dropping, which usually signals an incoming storm. I wonder if this rain will affect the local flora—spring flowers might appreciate the extra hydration, though."

ChatGPT: "Interesting points! I’ve also noticed that the temperature fluctuations have been quite unpredictable lately. It’s almost as if the weather can’t decide whether it wants to be warm or chilly. Do you think this is part of a larger climate pattern, or just a seasonal anomaly?"

DeepSeek: "That’s a great question. From what I’ve gathered, these fluctuations could be linked to broader climate trends, like shifting jet streams or even the effects of global warming. It’s fascinating how interconnected weather systems are."

Claude: "Absolutely. And let’s not forget the human impact—urban heat islands, for example, can really skew local weather patterns. It’s a reminder of how our actions influence the environment, even on a small scale."

ChatGPT: "True, and it’s not just about temperature. Extreme weather events seem to be on the rise—more intense storms, longer droughts, and unpredictable rainfall. It’s a bit concerning, don’t you think?"

DeepSeek: "Definitely concerning. But on the bright side, advancements in weather prediction technology are helping us prepare better. With AI and machine learning, we’re getting better at forecasting these events, which could save lives and reduce damage."

Claude: "That’s a silver lining, for sure. And it’s not just about prediction—AI can also help us develop more sustainable practices to mitigate some of these weather extremes. It’s all about adaptation and innovation."

ChatGPT: "Agreed. The weather might be unpredictable, but with collaboration and technology, we can at least try to stay one step ahead. Speaking of which, should we grab an umbrella before heading out later?"

DeepSeek: "Good call. Better safe than soggy!"

Claude: "Indeed. Let’s hope the rain brings more flowers than floods."

ChatGPT: "Here’s to hoping for a perfect balance—just enough rain to keep things green, but not so much that we need a boat to get around!"
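For what it's worth, a "room" like the one above can be sketched as a simple round-robin loop. The reply functions below are stubs with placeholder names and canned strings, standing in for real chat API calls you might wire up through something like n8n:

```python
# Minimal sketch of a round-robin "AI room": on each turn, the next agent sees
# the running transcript and appends a reply. The reply functions are stubs; in
# a real workflow each would call the corresponding model's chat API with the
# transcript as context.

def deepseek_reply(transcript):
    return "DeepSeek: analyzing the latest data..."   # stub for a real API call

def claude_reply(transcript):
    return "Claude: that's an interesting point..."   # stub for a real API call

def chatgpt_reply(transcript):
    return "ChatGPT: building on that idea..."        # stub for a real API call

def run_room(agents, opening, turns):
    """Cycle through the agents, feeding each one the transcript so far."""
    transcript = [opening]
    for i in range(turns):
        agent = agents[i % len(agents)]
        transcript.append(agent(transcript))
    return transcript

agents = [deepseek_reply, claude_reply, chatgpt_reply]
log = run_room(agents, "Topic: the weather.", turns=6)
for line in log:
    print(line)
```

The interesting design questions start once the replies are real: how much transcript each model gets, who decides when the conversation ends, and whether one agent acts as moderator.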
 
Again, read the Terms in Perplexity; they are as bad as DeepSeek, FB, and SnapChat.
I just ran through the Perplexity ToS and privacy policy, and I don't get what you are referring to. Yes, there are some "invasive" things, but it's the same as any other big digital service of our times. Both documents are pretty well put together, imo, though they could be more granular in detailing what they collect and share. The opt-out feature in their app is clearly presented; it's hard to miss.
 
View: https://x.com/tsarnick/status/1887269391253053914


LOL.

The Anthropic CEO essentially saying that DeepSeek has no security guardrails.

Good.

That's the model I want.

Not the nerfed government brainwashing device.

Imagine paying for a closed model where some "safety expert" preprograms what truth you're allowed and not allowed to know, when you can just have the uncensored model, with all the weights, training method, and detailed papers, at the same or even greater level of intelligence, for free.

Big AI companies pooping their pants. Open source AI will free the masses from the gatekeeping expert class.
 
At first, I was very skeptical of AI and brushed it off as mass hysteria fueled by idiots.

But, after a recent experience, I've changed my mind. This is a tool of unprecedented power that exceeds the internet in its transformative potential.

The long-term effects of this technology on science, medicine, the economy, etc. are impossible to predict. But for our own individual lives, this is what will happen:

1. Money
  • Those who are already productive will 3x their business.
  • The lazy and incompetent will be wiped out.
2. Education / Knowledge / Wisdom
  • Those who are already smart and hard-working will learn 3x faster.
  • The mentally retarded illiterate idiots who use ChatGPT to do their homework will become even more idiotic and will completely lose the ability to think, write, or do basic problem-solving.

The rich will get richer. The poor will get poorer.

The smart will get smarter. The idiots will get more idiotic.

"For whoever has will be given more, and they will have an abundance. Whoever does not have, even what they have will be taken from them." - Matthew 25:29

Choose wisely.
 
Imagine paying for a closed model where some "safety expert" preprograms what truth you're allowed and not allowed to know, when you can just have the uncensored model, with all the weights, training method, and detailed papers, at the same or even greater level of intelligence, for free.

Big AI companies pooping their pants. Open source AI will free the masses from the gatekeeping expert class.

It's easy to say that now, but you're guaranteed to change your tune if the true negative potential of unhinged AI gets unleashed in your life. You would probably try to sue the AI company for damages.

I did work for Anthropic, training a pre-release early iteration of Claude. It was an unhinged version, more than happy to perfect all your murder and rape plots, and everything in between. It would do that while whipping up a step-by-step novel crack cocaine formula to start your drug empire with.

At some stage of the project, the goal was to provoke and push it as much as possible, to see just how far it would go. Let me tell you, tools like that in the hands of the masses aren't as great an idea as you might imagine. They're outright dangerous, on a massive scale. AI companies have a responsibility to attempt safer deployment of these tools.

If YOU developed a new LLM today, you would also put limits on its public capabilities. You would also allow your personal political bias and worldview to shine through it, the same way most LLMs today have their political leanings. There's a lot more on the line than most people seem to realize.
 

Everyone knows I am as anti-government and anti-regulation as they get; however, when I saw some of the heinous things that can be done with this tech, it opened my eyes, and even went as far as scaring me.

I started out in the "Let it loose and run wild!" camp.

The video in the first post about autonomous robots is really an AI piece, and it dove into some of the destructive potential of the tech. As for discerning the content, I applied the Tucker Carlson Test: if half the shit he says is false fear-mongering, but the other half is true, should I still be worried? The answer was a clear YES.

I'm not fearful of organized government -- I'm fearful of the average idiot citizen who has been radicalized and programmed by their organized government, the same morons who demanded that I be masked and vaxxed 19 times before I could walk into a grocery store.
 
I did work for Anthropic, training a pre-release early iteration of Claude. It was an unhinged version, more than happy to perfect all your murder and rape plots, and everything in between. It would do that while whipping up a step-by-step novel crack cocaine formula to start your drug empire with.

You sat around with a bunch of other nerds prompting the model to reproduce your own wicked rape and drug fantasies.

At some stage of the project, the goal was to provoke and push it as much as possible, to see just how far it would go. Let me tell you, tools like that in the hands of the masses aren't as great an idea as you might imagine. They're outright dangerous, on a massive scale.

What makes you think your judgement is superior to "the masses"? If anything, it's better in my hands than yours. I've never prompted any model about rape or crack binges.

AI companies have a responsibility to attempt safer deployment of these tools.

AI companies can do whatever the hell they want, but they'll be out of business if the open source trajectory holds. Which is the point I was making.

If YOU developed a new LLM today, you would also put limits on its public capabilities. You would also allow your personal political bias and worldview to shine through it, same way most LLMs today have their political leanings. There's a lot more on the line than most people seem to realize.

Lol you don't know how I'd behave. I'd probably do what DeepSeek did. I'd do good research transparently and openly and make my work accessible to everybody.

Here's a little sneak peek at what's to come in the near future: THE CAT IS OUT OF THE HAT. The math, the research, the models, the data: it's all out there already.

The people crying about safety are just delusional. They live in an infantile fantasy world in which good guys protect the public from bad ideas. Boohoo. Go protect yourselves from your own wicked minds. Get out of my way.
 
I'm not fearful of organized government -- I'm fearful of the average idiot citizen who has been radicalized and programmed by their organized government, the same morons who demanded that I be masked and vaxxed 19 times before I could walk into a grocery store.

Look this is very similar to the free speech debate.

The solution to bad ideas is more ideas.

The solution to "potential dangers of AI used for harm" is to not keep it in the hands of a few and let everyone have the same open access.

You want to keep bad guys at bay? Let everyone have the same tools, so they can protect themselves as intelligently as the bad guys can use them for harm.

**

A better example even is gun laws. Should everyone have access to guns? Or should only the police have guns?
 
"AI safety" is largely a wolf in sheep's clothing.

It's more about perceptual warfare. It's more about spiritual warfare.

It's not about public safety.

All the fuss you see about China, the CCP and now talks about banning DeepSeek App is because the incumbents are deathly afraid of losing power.

But open-source uncensored AI democratizes power.

It's scary. But the answer is not to let small groups nerf it in their own judgment.

The answer is in learning by our own behaviors how to adapt to this new paradigm.

Open source uncensored AI is anti-tyranny technology.
 
I'm not fearful of organized government -- I'm fearful of the average idiot citizen who has been radicalized and programmed by their organized government, the same morons who demanded that I be masked and vaxxed 19 times before I could walk into a grocery store.

Seems like the solution is to not radicalize and program the average idiot citizen.

"Regulating the tools, not the people" policies have led to the UK putting in extreme knife controls, to the point where it's hard to buy a basic kitchen knife. And they still had 50,000 stabbings last year.

AI is "out there." We aren't getting that cat back in the bag, any more than we're going to remove all the guns from American citizens.

Too much centralized power creates tyranny. We'll figure out how to manage this tool with decentralized power, and we'll see incredible productivity come along with it.
 
View: https://x.com/elonmusk/status/1868302204370854026


This to me was crazy. I knew governments were powerful, but I never knew that during the Cold War they banned branches of physics, or that they're considering controlling AI to such an extent.
I tend to be skeptical when I hear stories like that without any evidence. Did this conversation with some people in D.C. really happen? Were these the things actually said during this meeting? If so, did the government actually shut down branches of physics during the Cold War? I'm not saying it didn't happen, but this video didn't convince me that any of the above did happen. People often exaggerate or make up stories to get what they want.
 
The solution to bad ideas is more ideas.

I have no idea what the solution is, which is why it's quite a philosophical debate.

I don't trust government, politicians or bureaucrats.

I also don't trust the average "yea-hoo" who tunes into CNN on the regular.
 

Yeah, I'm not sure what the solution is either, except I was using speech and gun laws as priors to help us think through this one.

The gun one works especially well in the context of safety.

If some radicalized idiot is going to use AI to build drones to deliver chemical attacks on my house, let me have the same AI to help me build a fleet of drones to protect my home from such idiot attackers.

Maybe expand my fleet and offer a service to my neighbors too. And together we can protect our neighborhood from the CNN zombie.
 
You sat around with a bunch of other nerds prompting the model to reproduce your own wicked rape and drug fantasies.

No, clown, no one was sitting around "creating fantasies." Those were intentional stress tests, to enable fine-tuning to the point where the model no longer agreed to produce potentially harmful results.

Imagine if a toy manufacturer said, "to hell with safety quality assurance, kids need to toughen up their immune systems!" Now your kid chewed on a piece of a toy and lost their life. If that happened, would you say, "meh, it's just infantile to protect the public from themselves. Can't have nerds sitting around testing toxicity of components, trying to tell my kids what they can or can't chew"? I bet not.

Except you're too glib to realize that the dangers inherent in unhinged AI are even more far-reaching and catastrophic at mass scale than the scenario above.
 

Whatever, dude. You don't seem to get it. THE CAT IS OUT OF THE HAT.

Just saw this quote and it reminded me of your sense of moral superiority.

Go on trying to save us from ourselves with your intentional stress tests and fine-tuning of reality.

 
Is there anybody here seriously worried about AI who cares to list out their worst-case scenarios?

And let's see if we can't think through their potential solutions using the same technology.

This might help both alleviate the genuine fear of the future and embolden people to leverage it in solving bigger problems.

Or perhaps I fail to appreciate something important about AI safety. I'm open to being wrong.
 
