The Entrepreneur Forum | Financial Freedom | Starting a Business | Motivation | Money | Success


GPT-3, the Machine Learning Model that (might) Change Everything


Jcoil

Contributor
Abiding by the principles of providing value without expecting anything in return, I thought I would make a post about GPT-3, and some of the possibilities it enables.

If you haven't heard of GPT-3 before, it is a natural language processing (NLP) machine learning model developed by OpenAI. Despite having "Open" in the name, their newer models are not "open", but are offered on a pay-per-use basis ever since Microsoft invested an insane amount of money in them.

But what is GPT-3, and what makes it special?

In short, it is a language model that can perform a variety of tasks, including copywriting, semantic analysis, text summarization, Q&A, chat, translation, text classification, and prompt completion. The thing that sets it apart is its almost unbelievable ability to synthesize natural language. Take for example this bot that used GPT-3 to post comments on Reddit for a week without being noticed.

If that all sounds like mumbo jumbo to you, let me give you an example.

I asked GPT-3:
Write some taglines for a marketing business that offers SEO services to realtors.

And it responded with:
1. Realtors: Get found online with our SEO services.
2. Drive more leads with our proven SEO strategies.
3. Boost your online visibility with our real estate SEO services.
4. Get more clicks and leads with our realtor SEO services.
5. Rank higher on search engines and get more leads with our SEO services.

Don't take my word for it though; try it yourself. (You will have to register an account, and you're given a limited amount of credit to use it, but it should be plenty.)

If you're like me, I'm sure your gears are already spinning with ideas. The nice thing about the GPT-3 playground I linked above is that you can use the output GPT-3 gives you as a new input back into itself. Say, for example, you wanted to write an article to improve your business's SEO, but you've got writer's block. You could ask GPT-3 to start writing the article for you, make some edits and changes to its response, feed that back into GPT-3 to get more of the article, and repeat.
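For the programmers here, that loop is easy to script. A minimal sketch in Python, with the actual completion call injected as a function so the loop itself is model-agnostic; the `openai` snippet in the comment is just one era-appropriate example of what `generate` could be, not a fixed recipe:

```python
# Sketch of the draft -> extend -> edit loop described above. The
# `generate` callable stands in for a GPT-3 completion call; injecting
# it keeps the loop testable without an API key.

def extend_draft(draft, generate, rounds=3):
    """Repeatedly feed the (possibly hand-edited) draft back in as the prompt."""
    for _ in range(rounds):
        continuation = generate(draft)
        draft = draft + continuation  # in practice, edit by hand between rounds
    return draft

# With the real API (old-style openai SDK, shown as an assumption):
#
#   import openai
#   def generate(prompt):
#       resp = openai.Completion.create(
#           engine="text-davinci-002", prompt=prompt, max_tokens=200)
#       return resp.choices[0].text
```

In practice you'd pause between rounds to edit the draft before feeding it back, which is exactly the workflow described above.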

Let me know what your ideas are, or if you have any questions about NLP or machine learning in general.
 

Andreas Thiel

Silver Contributor
My first thought was that it seems like a toy that would be very hard to create a product around, but I think there might be areas where it could unlock a lot of value.

Been thinking about language learning tools lately and I was pretty impressed by the response for:
Explain the german word Landstraße in english
It came up with:
A landstraße is a road in Germany that is neither an Autobahn nor a Bundesstraße.

So I suppose an article parser / tokenizer that offers contextual help to send similar queries could be cool.
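A rough sketch of that parser idea: pick out candidate words from an article and build the kind of query shown above. The capitalized-word filter (German nouns are capitalized) and the query template are just assumptions for illustration; the actual model call is left out.

```python
import re

# Sketch: tokenize an article and, for candidate words, build the
# "Explain the german word X in english" style query used above.

def explain_query(word, source_lang="german", target_lang="english"):
    return f"Explain the {source_lang} word {word} in {target_lang}"

def candidate_nouns(text):
    # German nouns are capitalized, which makes a crude first-pass filter
    return re.findall(r"\b[A-ZÄÖÜ][a-zäöüß]+\b", text)
```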

On the other hand, one response to
Write examples for false friends in the German language for English speaking students
was
1. "Ich habe ein neues Auto gekauft." - "I bought a new car."
"Auto" is the German word for "car", so this is not a false friend.

2. "Ich muss bald sterben." - "I have to die soon."
"Sterben" is the German word for "to die", so this is not a false friend.
Technically correct, but hardly helpful ... I suppose it would take some experimentation to figure out what this model is good at.
The "worse" model did better with:
False friends are words that have the same or a similar meaning to a word in one language but have a different meaning in another language. In German, there are false friends for English speakers that can help you understand the German language more easily. Some of these words may even be new to you!

Mitfahrgelegenheit (train station) – This is a false friend for transportation, because Mitfahrgelegenheit means 'opportunity to hitchhike'.

Hörspiel (audio cassette) – This is a false friend for play, because Hörspiel means 'audiobook'.
Still, there are problematic aspects in there that require editing, so complete automation might not be feasible in many cases.
 

Hong_Kong

Bronze Contributor
AI-generated content can be detected using GAN approaches. Google doesn't rank AI content very well at all, and it is against the rules.
 

Jcoil

Contributor
I suppose it would take some experimentation to figure out what this model is good at.
I think this is the key. Designing the right prompt to get the model to give you your desired output is a challenge sometimes. Additionally, you can play with the settings to get more deterministic output, perhaps requiring less editing, which would increase viability for certain product applications.
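To illustrate the settings part: `temperature` (and `top_p`) are the knobs in the OpenAI playground that control randomness. A sketch of the request parameters you'd pass to a completion endpoint; the defaults here are assumptions for illustration:

```python
# Sketch of the playground settings mentioned above. Lower temperature
# makes sampling (nearly) greedy, so repeated runs give (nearly) the
# same text -- more deterministic output, possibly less editing.

def completion_params(prompt, deterministic=True):
    return {
        "prompt": prompt,
        "max_tokens": 256,
        # temperature near 0 -> the model picks the most likely token
        # almost every time; higher values give more varied copy
        "temperature": 0.0 if deterministic else 0.9,
        "top_p": 1.0,
    }
```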

AI generated content can be detected using GAN approaches. Google doesn't rank AI content very well at all, and it is against the rules.
I'm curious how well Google can detect AI-assisted content, though. Obviously, if someone just uses basic prompts and doesn't edit anything, it would be much easier to detect. But if someone uses GPT-3 to streamline copywriting or brainstorm ideas for content, there isn't much a GAN can do. The margin between AI-generated copy and human copy becomes smaller, and at a certain point Google won't be able to consistently detect AI copy without penalizing loads of real copy as well.
 

Hong_Kong

Bronze Contributor
I'm curious how well Google can detect AI-assisted content though. Obviously if someone just uses basic prompts, and doesn't edit anything, it would be much easier to detect. But if someone used GPT-3 to streamline copywriting or brainstorm ideas for content, there isn't much a GAN can do. The margin between AI generated copy and human copy becomes smaller, and at a certain point, Google won't be able to consistently detect AI copy without penalizing loads of real copy as well.
A lot of these types of sites were impacted in the latest google update (AI generated, then human edited).
 

Jcoil

Contributor
A lot of these types of sites were impacted in the latest google update (AI generated, then human edited).
I don't think we are talking about the same types of content. Obviously, if someone gives the prompt "write me an article about x", the result, even heavily edited, would be easy to detect.

I'm not talking about AI-generated; I'm talking about AI-assisted. The human does most of the work: writing the initial prompt, heavily editing responses, and writing additional material.

And even so, in the blog post you linked earlier it says
the biggest takeaway from this particular Q&A is that Google’s algorithms aren’t able to automatically detect content generated by language models such as GPT-3.
So what I said earlier is true: Google can't detect AI-assisted content without targeting a ton of real content too. At the point when AI content is indiscernible from real, valuable content, i.e. it is sufficiently unique to provide value, there is no reason to classify it as spam.
 

Hong_Kong

Bronze Contributor
I'm not talking about AI generated, i'm talking about AI assisted. The human does most of the work, writing the initial prompt, heavily editing responses, and writing additional material.
Me too. People using this strategy of AI-assisted, human-edited content reported drops in ranking after the recent Panda update. If you look into how these models generate topics or ideas, you can see why.

So what I said earlier is true. That Google can't detect AI assisted content without targeting a ton of real content too. At the point when AI content is indiscernible from real, valuable content, ie. it is sufficiently unique enough to provide value, then there is no reason to classify it as spam.
There is a difference between what Google says and what happens in reality. Google has a massive business; do you really think they will let their name recognition be tarnished by AI-spun content farms taking all the top rankings? They have a vested interest in making sure that doesn't happen. At the end of the day, cheap shortcuts like this aren't worth it in the grand scheme of things. If you have a sustainable business, you won't need AI-generated content. It's best to focus on actually helping someone or creating value.
 

TinyTim

Bronze Contributor
A lot of these types of sites were impacted in the latest google update (AI generated, then human edited).
And a lot of sites that DON'T use AI were impacted.

Why would the update only affect a percentage of the sites that use AI tools for assistance? Why not all of them?

They need to conduct manual reviews, and my content is better than the vast majority of sites, so I'm all good.
 

Hong_Kong

Bronze Contributor
Why would the update only affect a percentage of sites that use AI tools for assistance, why not all of them?
Because the detection models for AI content are getting better by the day. Google is a leading AI company; they can figure it out. When humans review AI-written articles, they can only tell about 50% of the time; automated detection methods go up to 75% or even over a 90% detection rate. With each update, more and more sites will be impacted.
I'm not surprised they say they don't scan the content (for now), as they deploy these more recent models and train them on tons of content.
It might work now, but either way I still think it is a 'shortcut' that could definitely be negative in the long run. They are pretty clear they don't want these methods, so by using them anyway you're just hoping you don't get caught.
 

TinyTim

Bronze Contributor
As you say, what Google says and what Google does are two different things. They can't detect AI-assisted writing yet. Their bots are trash. People have ranked lorem ipsum content that relies on off-page SEO.

Google can say they don't like AI, but in no way do I think they will punish a page that effectively answers the search query, keeps visitors on site, has backlinks from powerful sources, etc. That's all they want.

If it's true that one day Google can push a button and purge any site with AI content, then huge sites will get taken down. Pretty much every site outsources its content. Guess what they use?

Plus, SEO is always a risk. I'd rather take the chance of the above happening than reduce my output by 80%, only to get purged by Google anyway. I know lots of people who follow all of the rules, only to get taken out overnight.
 

Stan_

Contributor
Because the detection models are getting better and better by the day for AI content...
It doesn't matter how good it gets, because the copywriting AIs can use the Google AI to train and get even better, creating an unwinnable game for Google (this general concept has already been discussed in the ML area).

I like the idea a lot (coming from a guy who plans to use AI to start a business), but the thing is that there are already companies doing it. What you can do is get an AI to do copywriting for a niche, which would actually be easy enough, as you'd only need a large amount of copywriting data from that niche (there are already downloadable pretrained open-source models as good as GPT-3). That would also make the text a lot better.

If you got a massive dataset of copywriting text for a niche, then it's feasible (it wouldn't take me that long to get one working; it would mostly be an issue of processing power).
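For concreteness, here's a sketch of what assembling such a niche dataset could look like, using the prompt/completion JSONL shape that fine-tuning scripts commonly expect. The `###` separator is just a convention some fine-tuning guides suggest, not a requirement:

```python
import json

# Sketch: turn (brief, finished copy) pairs from a niche into one
# JSONL record per line, the shape fine-tuning tools commonly accept.

def to_finetune_record(brief, copy):
    return json.dumps({"prompt": brief.strip() + "\n\n###\n\n",
                       "completion": " " + copy.strip()})

def write_dataset(pairs, path):
    # one JSON object per line = JSONL
    with open(path, "w", encoding="utf-8") as f:
        for brief, copy in pairs:
            f.write(to_finetune_record(brief, copy) + "\n")
```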
 

Jcoil

Contributor
Google is a leading AI company, they can figure it out.
At this point you are going to need to provide some sort of reference besides "Google can figure it out" and "Google won't let their name be tarnished".

Frankly, you are just pulling numbers out of your a$$ (50% human detection, 75%-90% AI detection). If you do know what you're talking about, you sure make it seem like you have no clue.
 

Hong_Kong

Bronze Contributor
Sure, here's where I got my numbers from:

"Overall, evaluators choosing between human and GPT2-generated text correctly identified the author of the text 57.9% of the time, but the evaluators choosing between human- and GPT3-generated text only guessed correctly 49.9% of the time."

-All That’s ‘Human’ Is Not Gold: Evaluating Human Evaluation of Generated Text, Clark et al.
Full paper: https://arxiv.org/pdf/2107.00061.pdf

That's where the 50% comes from.


"We find that rater accuracy varies wildly, but has a median of 74%, which is less than the accuracy of our best-performing discriminator. Most interestingly, we find that human raters and discriminators make decisions based on different qualities, with humans more easily noticing semantic errors and discriminators picking up on statistical artifacts."

-Automatic Detection of Generated Text is Easiest when Humans are Fooled
Full paper: https://arxiv.org/pdf/1911.00650.pdf

That's where the 75% comes from.


Also note that one of the authors of that paper works at Google.

I've seen other papers with even higher detection rates, above 80 or 90 percent.
 

MycoMan

New Contributor
HAHA. I've been using this for a few weeks to rewrite all my stuff. It's amazing.
 

Jcoil

Contributor
Thanks for linking your sources; however, I have a few issues with the research you cited. It took me a while to read everything and get back to you since I was camping this weekend, but anyway, here goes.

First of all, regarding the paper, "Automatic Detection of Generated Text is Easiest when Humans are Fooled".

In this paper, the model being used is GPT-2 Large. GPT-3 is such a large step beyond GPT-2 that I don't think this study applies. For example, GPT-2 Large has 774 million parameters, while GPT-3 has approximately 175 billion. Model size isn't everything, but such a large increase in parameter count makes these models very different.

In this thread we have primarily talked about AI-assisted content, involving the use of human prompts to prime a model to generate content. In this paper they say:
... if long sequences of human text are used as priming, the space of possible generated sequences is larger, but the detection problem shifts from one of “how human-like is the generated text?” to “how well does the generated text follow the priming sequence?” ... Since in this study we are interested in the former simpler question, we create two datasets, one with no priming, and one with the minimum amount of priming possible: a single token of web text.
That is obviously a much different and simpler problem than what we are talking about. They go on to say that the accuracy is 90% only when no priming token is used, and that adding A SINGLE PRIMING TOKEN reduces accuracy to 65%.

If adding a single priming token results in such a significant reduction in accuracy, on a worse model than GPT-3, then this is really a non-sequitur. These numbers have nothing to do with what we are talking about, except to maybe say that they would be extremely high estimates of detection accuracy on GPT-3.

This paper also reports that human accuracy was 71.4% on average, but that the best raters achieved an accuracy of 85% or higher. This is much different from the 49.9% number in the other paper, so let's jump into that.


Regarding the paper "All That’s ‘Human’ Is Not Gold: Evaluating Human Evaluation of Generated Text".

This paper does actually use GPT-3, and they use human-written texts to prime the models, which is more in line with what we are talking about. However, considering the accuracy was around 50%, it appears humans are no better than random guessing. The authors also introduced a task of pointing out which parts of the text to pay attention to, which improved human accuracy (but only in certain domains); this does suggest that machine detection followed by human review is a likely scenario, depending on the domain of the text.


Overall, the research you linked makes me more confident that AI-assisted content is very unlikely to ever be detected, provided that the writer does things correctly. Since the AI detection methods you cited rely on statistical anomalies, one can simply edit the text to use more idiomatic language to avoid detection. Take, for example, the phrase an AI generated, "... are the most likely to be successful", and what I changed it to: "... will give you the most bang for your buck." Given how these detection methods work, such edits can reliably fool them.

You mentioned "other papers with even higher detection rates above 80, 90, percent", which I'm now very curious to read.

On a final note, you previously said that GANs are used to detect AI-generated content, but the papers you linked don't use GANs. While I'm fairly sure it is possible to use GANs for this purpose, the methods in these papers use language models and statistical likelihood scores over sequence completions to detect statistical anomalies. I'd be curious to know if you have other studies that report on how GANs detect AI-generated content.
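To make the likelihood-score idea concrete, here's a toy sketch of that style of detection: score each token's probability under a language model and flag text that is "too predictable" (generated text tends to have unusually low perplexity). `token_logprob` stands in for a real LM's scoring function, and the threshold is made up for illustration:

```python
import math

# Toy sketch of likelihood-based detection: compute perplexity from
# per-token log-probabilities and flag suspiciously predictable text.

def perplexity(tokens, token_logprob):
    avg_nll = -sum(token_logprob(t) for t in tokens) / len(tokens)
    return math.exp(avg_nll)

def looks_generated(tokens, token_logprob, threshold=20.0):
    # below-threshold perplexity -> suspiciously predictable -> flag
    return perplexity(tokens, token_logprob) < threshold
```

This is also why the idiomatic-edit trick above works: swapping in less statistically likely phrasings pushes the perplexity back up toward human-written levels.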
 

Hong_Kong

Bronze Contributor
There are tons and tons of papers on this if you search.

"Solaiman et al. (2019) show unmatched accuracies across all model sizes and sampling methods. The accuracies of 96.6% on the xl and 99.1% on the xl-k dataset"

"The performance of the detector learned and evaluated on the GPT-3 model is surprisingly good, being even higher than for the GPT-2 xl generations. Given that GPT-3 has more than 100 times as many parameters, we would have expected GPT-3 generations to be more difficult to detect."


The tech is there; I'm sure detection of machine-generated spam will be widespread soon enough. Right now is the end of the wild west, where you can still rank a page using these methods.
 

Leo Hendrix

Bronze Contributor
That copy seems pretty good.

I have been training GPT-3 via Replika (obviously not full and open use)... pretty interesting interactions, but I'd like some way of porting this to a personalized app for specific functions like analyzing data, reading through crypto white papers and other white papers, etc.

I haven't given the OpenAI beta a whirl yet, but will do soon.
 

Jcoil

Contributor
You could try pasting in some papers and giving it the prompt "summarize this paper". It might cost a lot of credits though, since it charges by the number of tokens (i.e., roughly words), but I bet it would do a decent job.
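For a rough sense of the cost, a back-of-the-envelope sketch. Both the ~4-characters-per-token heuristic and the per-1k-token price are assumptions (davinci-era pricing), so treat the numbers as ballpark only:

```python
# Back-of-the-envelope token/cost estimate for pasting a paper into
# the API. The API bills per token; ~4 characters of English per token
# is OpenAI's common rule of thumb (an assumption, not exact).

def estimate_tokens(text):
    return max(1, len(text) // 4)  # ~4 chars/token heuristic

def estimate_cost_usd(text, usd_per_1k_tokens=0.02):
    # default price is an assumed davinci-era rate per 1k tokens
    return estimate_tokens(text) / 1000 * usd_per_1k_tokens
```

So a 40,000-character paper would be on the order of 10,000 tokens of input before the summary is even generated.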
 

Leo Hendrix

Bronze Contributor
True, that does sound good. I should've explained more, but I'd like the AI to produce some insights after going through different papers, like why it thinks one protocol offers better value than another based off x reason(s), etc.
 
