The Entrepreneur Forum | Financial Freedom | Starting a Business | Motivation | Money | Success

"Artificial Intelligence" is the biggest scam going.

loop101

Mar 3, 2013
I find all this "AI" hoopla surreal. The only things really different between now and the late 1980s are that more things are being tracked, and that processors capable of parallel computation are much more common. I don't see where all this "AI" crap is coming from, but I don't expect it to end well. Another one of these:

AI winter - Wikipedia

"In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research.[1] The term was coined by analogy to the idea of a nuclear winter.[2] The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or decades later."

IMHO, the world of AI has had basically three eras: 1) the Perceptron Era, 2) the Expert System Era, and 3) the Multi-Layer Perceptron ("Neural Network") Era.

Perceptrons were connectionist systems loosely modeled on the human eye, and could be trained to solve simple problems - but not XOR-type problems. They could learn to tell you if A was true, if B was true, or if A OR B was true, but NOT if exactly one of A and B was true (XOR) - no single straight line separates those cases.
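
A tiny sketch makes the XOR limitation concrete. This is not from any library - just the classic perceptron learning rule, with a made-up learning rate and epoch count - trained on AND (linearly separable, learnable) and XOR (not):

```python
# Sketch: the classic perceptron learning rule, showing it converges on a
# linearly separable problem (AND) but never settles on XOR.
# All names and parameters here are illustrative, not from any library.

def train_perceptron(samples, epochs=25):
    """samples: list of ((x1, x2), target) with 0/1 targets."""
    w1 = w2 = b = 0.0
    lr = 0.1
    for _ in range(epochs):
        errors = 0
        for (x1, x2), t in samples:
            y = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            if y != t:
                errors += 1
                w1 += lr * (t - y) * x1
                w2 += lr * (t - y) * x2
                b  += lr * (t - y)
        if errors == 0:          # converged: a separating line was found
            return (w1, w2, b), True
    return (w1, w2, b), False    # still misclassifying after all epochs

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

_, and_ok = train_perceptron(AND)
_, xor_ok = train_perceptron(XOR)
print("AND learned:", and_ok)   # True  - linearly separable
print("XOR learned:", xor_ok)   # False - no single line separates it
```

The XOR run fails no matter how many epochs you allow, because no setting of (w1, w2, b) classifies all four cases correctly.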

Two a-holes named Minsky and Papert wrote a book in 1969 called "Perceptrons" that exposed this weakness with XOR problems, and all the government's R&D money (for the next 20 years) went to the a-holes who were pushing rule-based expert systems. Perceptrons were LEARNING systems, while expert systems just vomited out expertise hand-CODED by a subject-matter expert.

In 1986, a really cool guy named Rumelhart (and his colleagues) developed a multi-layer Perceptron that COULD solve the XOR problem, and they provided an algorithm ("back-prop") that allowed the system to learn properly. This freed AI users from hard-coding expertise into programs and let them switch to self-learning systems. A year after this paper came out, Minsky and Papert were booed at an AI conference.
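
To see why the extra layer matters, here is a hand-wired 2-2-1 MLP that computes XOR. The weights below are set by hand purely for illustration; back-prop's whole contribution was finding weights like these automatically from examples:

```python
# Sketch: a hand-wired 2-2-1 multi-layer perceptron computing XOR.
# Back-prop's job is to FIND weights like these from data; they are fixed
# by hand here so the demo is deterministic.

def step(x):
    return 1 if x > 0 else 0

def mlp_xor(x1, x2):
    h1 = step(1.0 * x1 + 1.0 * x2 - 0.5)    # hidden unit 1: OR
    h2 = step(-1.0 * x1 - 1.0 * x2 + 1.5)   # hidden unit 2: NAND
    return step(1.0 * h1 + 1.0 * h2 - 1.5)  # output: AND(OR, NAND) = XOR

outputs = [mlp_xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(outputs)  # [0, 1, 1, 0]
```

The hidden layer re-maps the inputs so that the final unit's problem IS linearly separable - that is the whole trick a single-layer perceptron was missing.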

After 1986 there were other important developments, but they have mostly been improvements on the MLP idea. The MLP training algorithm (back-prop) relies on calculus formulas, which were a pain. Genetic Algorithms were applied to allow dead-simple, derivative-free calculation/evolution of the neural weights (connection strengths). Other nature-inspired algorithms came out - Particle Swarm Optimization, Ant Colony Optimization, Simulated Annealing, etc. I group all of these into the "MLP Era".
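
As a rough sketch of the derivative-free idea, here is a toy genetic algorithm evolving two "neural" weights toward a known linear target - no gradients, just mutate and select. The dataset, population size, and mutation scale are all invented for the example:

```python
import random

# Sketch: a minimal genetic algorithm evolving "neural" weights with no
# calculus at all - just mutation and selection. The toy target mapping and
# every parameter here are made up for illustration.

random.seed(0)

# Toy dataset: inputs (x1, x2) with targets generated by t = 1*x1 + 2*x2,
# so the ideal weights are (1, 2) with zero error.
DATA = [((1, 2), 5), ((2, 0), 2), ((0, 3), 6), ((3, 1), 5)]

def fitness(w):
    """Squared error of a tiny linear 'network'; lower is better."""
    return sum((w[0] * x1 + w[1] * x2 - t) ** 2 for (x1, x2), t in DATA)

def evolve(pop_size=20, generations=60, mut_scale=0.3):
    pop = [[random.uniform(-5, 5), random.uniform(-5, 5)]
           for _ in range(pop_size)]
    start_err = min(fitness(w) for w in pop)
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]      # selection: keep the best half
        children = [[w + random.gauss(0, mut_scale)
                     for w in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children           # elitism: the best never dies
    best = min(pop, key=fitness)
    return best, start_err, fitness(best)

best, start_err, end_err = evolve()
print("evolved weights:", best, "error:", round(end_err, 4))
```

Because the best individual always survives each generation, the error can only go down or stay put - the same "evolve the connection strengths" idea, minus the calculus.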

How do you know a system can be trusted? Anyone who has dealt with self-learning trading systems is familiar with this. It turns out, obviously enough, that the more samples you have of your "AI"'s output, the more certain you can be that it is modeling reality. If you build a system that predicts someone will have a heart attack within 30 days, and it is right twice, that is one thing. If it is right 200 times and wrong 2 times, that is another thing entirely. To have a high degree of certainty, you need a lot of unique test cases. This is what the world lacks.
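
The 2-right vs. 200-right intuition can be made precise with a standard binomial confidence interval. A quick sketch using the Wilson score interval (pure standard library; the numbers are the hypothetical heart-attack predictor above):

```python
import math

# Sketch: why 2-for-2 is not 200-for-202. A 95% Wilson score interval for
# the true accuracy, computed from (correct, total) counts - stdlib only.

def wilson_interval(correct, total, z=1.96):
    p = correct / total
    denom = 1 + z * z / total
    center = (p + z * z / (2 * total)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / total
                                     + z * z / (4 * total * total))
    return center - margin, center + margin

lo_small, hi_small = wilson_interval(2, 2)       # right twice out of two
lo_big, hi_big = wilson_interval(200, 202)       # right 200 of 202 times
print(f"2/2    : true accuracy in ({lo_small:.2f}, {hi_small:.2f}) at 95%")
print(f"200/202: true accuracy in ({lo_big:.2f}, {hi_big:.2f}) at 95%")
```

Two correct calls leave the plausible accuracy range enormous (the lower bound is down near a coin flip), while 200-of-202 pins it into a narrow band near 99% - exactly the "you need hundreds of unique tests" point.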

We track a lot of stuff - we have WIDE data, but we don't have DEEP data. If two test cases happened at the same TIME, even partially, they are generally not independent. How long has Amazon been tracking customer purchases? When they predict a sale, how far back in the customer's habits do they look? If Amazon looks at the last 2 years of shopping data, and the user joined Amazon 10 years ago, that gives them 5 non-overlapping samples. How certain can you be of anything from 5 samples? Not very. You need hundreds of samples to train the system, and hundreds more tests to see how well it learned. This DEEP data does not exist, and won't, for a very long time. Of course you can train a system across many users, but those cases are still not independent. Maybe a war breaks out and everyone stops shopping. Or a bird-virus scare creates a run on surgical masks. Maybe they buy toys, but only before Christmas.

So the learning algorithms have not really changed, and the only data we have is SHALLOW data. There is nothing there. It's a fantasy, but I guess a lot of money is going to be thrown at this fantasy.

I'm not saying you can't beat RANDOM, or you can't beat someone else, but I don't see any justification for the recent hype around AI. If anything, the Netflix Prize is one of the better real-world examples of AI.

Netflix Prize - Wikipedia
 