Y'all realize we are the LAST generation of humans on planet earth

Do you believe in the coming technological singularity?


  • Total voters
    53
Joined
May 16, 2012
Messages
39,600
Reputation
-17,856
Daps
84,289
Reppin
NULL
The thread implies that there will be one "Super AI" that does EVERYTHING.
If you look at AI today, everything from small examples to huge projects like Watson is written
by individuals to create an environment where the computer excels at specific parameters.

In 10 years things could change drastically, but at the moment I don't see some "Super AI"
displacing people by the millions. :yeshrug:
From where I stand and from what I gather, AI is designed to tackle specific problems.
If anything it's like smarter software in my view; there's no need to create Smart Autonomous AI
that does any and everything, and I don't see it happening in my lifetime. Unless some brilliant
individuals write one piece of software which can learn any and everything ever created and push
beyond that, but that's a tall order in my view.

Again, I won't say it's totally impossible, but we're at a point in human history where Moore's law is breaking
down and faster hardware is getting harder to produce because we're pushing silicon to its absolute limit.

You bring up interesting points but they have all been answered before.

As you said, all we have right now are AIs that are only good at a limited set of tasks that they have been specifically designed for. The holy grail of artificial intelligence research is "Strong AI": an artificial intelligence that can learn how to perform any task a human can perform and be superhuman at it. Yes, right now we don't have Strong AI or Artificial General Intelligence, which you allude to as "Super AI". The fact it hasn't been reached yet doesn't mean it's impossible; it just means we haven't reached the end point yet. Strong AI is basically only a few steps from the singularity. What we are in right now is the build-up to Strong AI.

With regard to Moore's law, people have been predicting its demise for decades now, and it's still going. It is true it's slowing down and we are close to the end, but shrinking transistors isn't the only way to continue to double computing power. Computer researchers have been preparing for the end of Moore's law and have devised new ways to keep doubling it. The future will be 3-D computer chips. Read about it in this article: Transistors will stop shrinking in 2021, but Moore’s law will live on
 

GoddamnyamanProf

Countdown to Armageddon
Joined
Apr 30, 2012
Messages
35,793
Reputation
884
Daps
106,210
Yeah, artificial intelligence will always get better over time, but it will never be smarter than its creator.

Ponder on this question: "How can something be smarter than me when I created it?"

It would be like humans saying they're smarter than God when He created us.
You aren't thinking logically.

Think of a genius. Is he/she smarter than their parents, the beings that created him/her? Almost always the answer is yes.

It's not even difficult to imagine how this would apply to A.I. either, since we've already made computers for years that beat human thinking in basic computations and specific functions. Think of a calculator. Punch any elaborate equation into it and it comes up with the answer instantly, exponentially faster than any human can figure out the same calculations. Now picture people creating an A.I. system in which those same principles of the calculator are applied more broadly to all-around systems of thinking. Once sentience is achieved and factored in, it's pretty much over from there.
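The jump from "hand-coded calculator" to "system that learns" can be sketched in a few lines. This is a toy illustration, not anything from the thread: a tiny perceptron that learns the AND function from examples instead of having the rule written in by hand. All names and data here are invented.

```python
# Toy sketch: a perceptron that LEARNS the AND function from examples,
# rather than being hand-coded for it like a calculator would be.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for a binary threshold unit from (inputs, target) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - pred
            # Nudge the weights toward the correct answer on each mistake.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# Training data: the truth table for AND.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_samples)
```

The same mistake-driven weight-update loop, scaled up enormously, is the core of the neural networks discussed later in the thread.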
 

silk scarfs

All Star
Joined
Nov 23, 2016
Messages
1,128
Reputation
127
Daps
4,776
Humans right now are robots. We are all programmed through our family, city, country, religious movements, race, and a basic "humanity".

We love, fight, are happy and sad, completely kill each other, and come back together for the benefit of people who use us for their own gain.

We're already the perfect robots for those who are in power.

We work and recharge ourselves, feed and maintain ourselves, like the perfect robot.
 

Auger

Superstar
Joined
May 13, 2012
Messages
12,419
Reputation
2,835
Daps
29,506
Reppin
6ix
That's a totally different dynamic. Plus the teacher didn't create the student per se; he just gave him the necessary tools, and with those tools the student expanded on what his teacher taught him.

Expansion happens all the time with humans, technology, etc.

But me creating artificial intelligence, meaning I built it from the ground up, means it can't be smarter than me, because I created it.



When you ponder on that question it really is.
If you built a program with the intention of it being able to learn and retain knowledge on its own, then that "creator" & "creation" dynamic no longer applies. It has evolved past being a tool and has moved closer to sentience.
 

BlackJesus

Spread science, save with coupons
Joined
Nov 16, 2015
Messages
7,571
Reputation
-3,222
Daps
21,739
Reppin
The Cosmos
I think your problem is you can't see how radical this technology is and how much it will change the world in the next 20-30 years. You are thinking of this issue in present terms and worrying about things like rations.

The world works according to the laws of economics, aka scarcity. Everything in the universe is scarce and limited. A basic fact of life, built into reality itself. Immutable and unchanging.

If we are talking about the same thing (namely UBI, or universal basic income), of course there will be rationing. Money is scarce. The only question is, how much will you receive and who decides it?

Second, your overly optimistic, fairy-tale-style projections of the future fail to take into account the negative possibilities and risks of AI, making the technology potentially far more trouble than it's worth.

What happens if or when AI decides humans are an obstacle to its goals? Could such a thing be stopped? Could you stop a being that is 100s, maybe 1000s of times smarter and faster than you? We would be fukked.

If we don't die of drug overdose from living a life of pointlessness and idleness via UBI, we'll die from a Skynet-style robot apocalypse. There is no way out here. AI is a major problem.
 

Marlostanfield.

High On That Ray Charles
Joined
Jul 23, 2015
Messages
2,526
Reputation
-880
Daps
3,069
You aren't thinking logically.

Think of a genius. Is he/she smarter than their parents, the beings that created him/her? Almost always the answer is yes.

It's not even difficult to imagine how this would apply to A.I. either, since we've already made computers for years that beat human thinking in basic computations and specific functions. Think of a calculator. Punch any elaborate equation into it and it comes up with the answer instantly, exponentially faster than any human can figure out the same calculations. Now picture people creating an A.I. system in which those same principles of the calculator are applied more broadly to all-around systems of thinking. Once sentience is achieved and factored in, it's pretty much over from there.

God just used our parents to bring life into this world. To really be technical, we're God's children, so when you understand that, you can also understand how a parent's son/daughter can be smarter than them.

And I get what you're trying to say with the calculator, but someone still created that calculator, so that calculator is only as smart as its creator.

I understand human curiosity to see if it's possible to create a construct smarter than us, but even the Matrix proved that theory impossible, because a construct only goes as far as the human who created it.
 

Insensitive

Superstar
Joined
May 21, 2012
Messages
12,437
Reputation
4,885
Daps
42,334
Reppin
NULL
You bring up interesting points but they have all been answered before.

As you said, all we have right now are AIs that are only good at a limited set of tasks that they have been specifically designed for. The holy grail of artificial intelligence research is "Strong AI": an artificial intelligence that can learn how to perform any task a human can perform and be superhuman at it. Yes, right now we don't have Strong AI or Artificial General Intelligence, which you allude to as "Super AI". The fact it hasn't been reached yet doesn't mean it's impossible; it just means we haven't reached the end point yet. Strong AI is basically only a few steps from the singularity. What we are in right now is the build-up to Strong AI.

With regard to Moore's law, people have been predicting its demise for decades now, and it's still going. It is true it's slowing down and we are close to the end, but shrinking transistors isn't the only way to continue to double computing power. Computer researchers have been preparing for the end of Moore's law and have devised new ways to keep doubling it. The future will be 3-D computer chips. Read about it in this article: Transistors will stop shrinking in 2021, but Moore’s law will live on

Well, people have been predicting Moore's law's demise for decades, and the reality
is a continued doubling of price for that doubling of performance as processors get smaller.
If your company makes $55 billion a year and it takes a third of its finances to produce state-of-the-art chips, which is
an expense that'll continue to increase ($7 billion to produce cutting-edge chips WILL double to $14 billion and so on...),
it gets harder and harder to justify the money to attempt to double the performance. That's without mentioning
the engineering issue that comes with the inherent difficulty of producing smaller and smaller chips.
 
Joined
May 16, 2012
Messages
39,600
Reputation
-17,856
Daps
84,289
Reppin
NULL
I disagree. AI or supercomputing in problem solving for business, science, etc. is already there, but
it hasn't led to widespread job loss. There's still a need for experts and human brilliance,
and with discussions like this people tend to think AI would make thinking obsolete.
A lot of the use for AI, or really computers today, is that they do the calculations and then humans interpret them.
I think that'll continue to be the trend until we create some "Super AI" that thinks for people. That's the real
argument here: whether or not a computer will supplant humanity as far as being thinking and creative beings
and thus "solve problems".

Again, we're seeing Moore's law fail as processors get smaller and smaller and more and more expensive to make.
If we plan for computers to ever "match" and "surpass" us in a decade and some change, a lot would have to change.
Economically, technologically, and so on.

Breh, you are behind the times. I just posted an article saying that Moore's law ain't ending. They are just gonna stop with the shrinking of transistors in 2021 and go to 3-dimensional computer chips, a technology they have been working on since 2006 in preparation for the end of transistor shrinking.

On your other point, I think you need to read up on some of the breakthroughs by AIs in the last year or so. They've started doing shyt people didn't think was possible just a few years ago. AIs can now recognize images better than a human. They can compose music and make art, as @TrebleMan posted earlier. They have also begun to create their own languages with human sounds and tones. And the most startling: an AI was given a still image and was able to dream up a video of what it thought would happen in the next few seconds!!! :mindblown:



I would suggest watching this video so you can see how fast artificial intelligence is progressing.
 

Marlostanfield.

High On That Ray Charles
Joined
Jul 23, 2015
Messages
2,526
Reputation
-880
Daps
3,069
If you built a program with the intention of it being able to learn and retain knowledge on its own, then that "creator" & "creation" dynamic no longer applies. It has evolved past being a tool and has moved closer to sentience.

Nah, it still applies. You know why? Because the construct's learning ability only goes as far as its creator's.

If you create the construct and place a program in it where it can learn on its own, that doesn't mean the construct will exceed its creator's knowledge. It just means the smarter humans become, the smarter the construct becomes too, because it can learn as well. It's not like the construct can learn what we don't know.
 

TrebleMan

Superstar
Joined
Jul 29, 2015
Messages
5,592
Reputation
1,190
Daps
17,547
Reppin
Los Angeles
AIs today have authors, thus they have specific goals in mind and a very limited scope.
Using AI to analyze some artist's paintings or greatest compositions, then having that AI
create artwork in "that style", isn't new. Side note: I have an interest in music composition,
and when submitting works to contests it is specified that you shouldn't use a computer-composed
piece of work (as in one you had no hand in creating).
Creation isn't so mysterious that it cannot be duplicated, and it literally has nothing to do with whether
or not we'll create machines that'll completely render people obsolete. The idea that we're
marching towards a singularity and by 2030 we'll all be cyborgs has to face some troubling
realities once we look at the stretching of silicon to its limits, the real limits of batteries, and so on.

"They" are humans.
"They" aren't living in a bubble.
And "They" have to grapple with the realities of producing hardware that would allow
the prevalence of "Super AI".

http://www.economist.com/technology-quarterly/2016-03-12/after-moores-law

Transistors will stop shrinking in 2021, but Moore’s law will live on

Again, it's a complex subject which has no foregone conclusion, I think computing
will continue to be more like cars and planes, there to assist but not replace.
I won't say it could never happen but there's a lot that would have to change.

I'm not necessarily talking about us becoming machines/cyborgs ourselves, I can't even imagine that happening (but not doubting it either). That's a little too much magical thinking for me right now.

What I do believe is that machines will be making a lot of decisions with much more accuracy than we can when given the same data a human is given.

8 Ways Machine Learning Is Improving Companies’ Work Processes
Machine learning enables a company to reimagine end-to-end business processes with digital intelligence. The potential is enormous. That’s why software vendors are investing heavily in adding AI to their existing applications and in creating net-new solutions.

Because decisions are based on data and experience at the end of the day. A machine can store data and observe the conclusions, then store that data and make comparisons. It's not new, but it's getting a lot more automated. Even the human elements/interactions are all data at the end of the day.
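The "store data, then decide by comparison" idea above can be made concrete with a nearest-neighbor sketch. This is my own toy illustration, not anything the thread's companies actually run; the feature names and all numbers are invented.

```python
# Rough sketch of decision-by-comparison: a 1-nearest-neighbor classifier in
# plain Python. A system logs past cases and outcomes, then decides a new case
# by finding the most similar stored one. All data below is made up.
import math

def nearest_neighbor(history, new_case):
    """Return the outcome of the stored record closest to the new case."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best = min(history, key=lambda record: dist(record[0], new_case))
    return best[1]

# Hypothetical logged cases: (order_size, delivery_distance) -> outcome.
history = [
    ((1.0, 2.0), "on_time"),
    ((8.0, 9.0), "late"),
    ((2.0, 1.0), "on_time"),
    ((9.0, 7.0), "late"),
]

# A new case near the "on_time" cluster gets that decision.
decision = nearest_neighbor(history, (1.5, 1.5))
```

Real systems replace the raw distance comparison with learned models, but the shape of the process, stored experience plus comparison, is the same.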

There's a human element, there always is, but there aren't many who call shots anyway. If a shot caller sees it fit that employing a machine gets better results and is cheaper, they'll cut people off. Machine learning capabilities have taken off very recently.

The problem will be when many people have been cut off. That's where things like UBI come into play, or we face total rebellion.

Also, there doesn't need to be a big distribution of hardware; programmers tend to plug into machine learning APIs. Google doesn't have plans to distribute its new TensorFlow TPU, but you can make calls to their API to use it for your own or your company's personal projects.

For one of my recent projects I was going to see if I could grab data from an NBA API and use one of Google's machine learning APIs with it.
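A project like that might look something like the sketch below. To keep it self-contained I've invented the stats and swapped the cloud ML call for a plain least-squares fit; a real version would pull live data from an NBA stats endpoint and hand it to a hosted model, and none of the numbers here are real.

```python
# Hypothetical sketch of "NBA stats -> simple model": fit a straight line
# (ordinary least squares) predicting points from minutes played.
# The per-game numbers are invented for illustration.

def fit_line(xs, ys):
    """Closed-form least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Made-up per-game data: minutes played -> points scored.
minutes = [10, 20, 25, 30, 35]
points = [5, 11, 13, 15, 18]

a, b = fit_line(minutes, points)
projected = a * 40 + b  # projected points if the player got 40 minutes
```

The cloud API version would do the same thing at a much larger scale, with the fitting handled server-side.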

The Great Strengths and Important Limitations Of Google's Machine Learning Chip
In 2011 Google realized they had a problem. They were getting serious about deep learning networks with computational demands that strained their resources. Google calculated they would have to have twice as many data centers as they already had if people used their deep learning speech recognition models for voice search for just three minutes a day. They needed more powerful and efficient processing chips.

What kind of chip did they need? Central processing units (CPUs) are built to handle a wide variety of computational tasks very efficiently. Their limitation is that they can only handle a relatively small number of tasks at one time. GPUs, on the other hand, are less efficient at carrying out a single task and they can handle a much smaller variety of tasks. Their strength is that they can carry out many tasks at the same time. If you have to multiply three floating point numbers, a CPU will crush a GPU; if you have to multiply a million sets of three floating point numbers, the GPU will blow the CPU out of the room.
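The CPU-versus-GPU contrast in the quoted article comes down to independence of the work items, which a short sketch makes visible. This is an illustration with synthetic data, not the article's code: each triple's product depends on nothing else, so while plain Python maps over them one at a time, a GPU kernel could assign one triple per thread and run them all at once.

```python
# Why "a million sets of three floats" suits a GPU: every product below is
# independent of the others, so in principle they can all run concurrently.
# Plain Python executes this serially; the point is the data-parallel shape.

def product_of_three(triple):
    a, b, c = triple
    return a * b * c

# A million independent work items (synthetic data).
triples = [(1.0, 2.0, 3.0)] * 1_000_000

# Serial CPU-style execution: one triple after another.
results = list(map(product_of_three, triples))
```

For three triples the loop overhead is negligible and a CPU wins easily; for a million, hardware that runs thousands of these per clock pulls far ahead.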

GPUs are ideal for deep learning applications because sophisticated deep learning networks perform millions of computations that can be carried out at the same time. Nvidia is the go-to chip company for GPUs that are designed for machine learning. Google uses Nvidia GPUs but they needed something faster. They also needed a chip that is more efficient. A single GPU doesn’t consume a lot of energy but when you have millions of servers running 24/7 as Google does, energy consumption becomes a serious problem. Google decided to build a chip of their own.


Google told the world they had designed their own chip in May of last year. They called it a Tensor processing unit (TPU) because it’s custom designed to work with Tensorflow, Google’s open-source software library for machine learning. A good deal of speculation about what Google had in mind for the TPU accompanied the announcement because Google did not provide very much information about the chip’s architecture or what it could do. That information came in a paper Google released last week and it is very compelling.

The paper compares performance of the first generation TPU (which has been running in Google’s data centers since 2015) with Intel’s Haswell CPUs and Nvidia’s K80 Kepler dual GPU. A deep learning network must be trained before it can be used to infer information from data. Google’s first-generation TPU was designed for inference and thus the performance comparison was limited to inference operations.

Google compared the chips in terms of speed and efficiency. Speed was measured as tera (trillion)-operations performed per second as a function of memory bandwidth. The TPU was 15x to 30x faster than the CPUs and GPUs. Efficiency was measured as tera-operations performed per Watt of energy consumed. The TPU was 30x to 80x more efficient than the CPUs and GPUs.

These are extraordinary numbers but several caveats must be noted before jumping to the conclusion that the TPU is the future of deep learning computing. Google’s tests were carried out using chips that were contemporary in early 2015. Google, Nvidia and Intel have all improved their chips since then and it is unknown how today’s chips compare. Still, the TPU’s advantages were so great two years ago that it’s unlikely Intel and Nvidia have completely closed the gap.

A more important consideration is the nature of the chips being compared. Intel’s CPUs are general purpose chips designed for flexibility and speed running a limited number of processes at one time. Nvidia’s GPUs are general purpose chips designed for running many neural net computations at one time. Google’s TPU is an ASIC (application specific integrated circuit) that is custom designed to carry out specific functions in Tensorflow.

The CPU has maximum flexibility. It can run a wide variety of programs including deep learning networks performing both learning and inference using many software libraries. The GPU is not as flexible as the CPU but it is better at deep learning computation, it can carry out both learning and inference and it is also not limited to a single software library. The tested TPU has almost no flexibility. It does one thing, inference in Tensorflow, but it does it brilliantly.

Chip deployment in deep learning computing is not a zero-sum game. Real-world deep learning networks need a GPU in the system to communicate with either GPUs or ASICs like Google’s TPU. GPUs are ideal for work environments where deep learning flexibility is required or the necessary ASICs have yet to be built. ASIC’s are ideal when a full commitment has been made to a software library or platform.

Google has obviously made that commitment to Tensorflow, and the TPU's superior performance makes it highly likely that Tensorflow and the TPU will evolve together. The tight connection between the TPU and specific functions within particular builds of Tensorflow makes it unclear whether it's sensible for Google to market their chips outside the company. However, third parties that make use of Google's cloud services for machine learning solutions can reap the benefits of the TPU's exceptional performance metrics.

Plus, open source development, where everyone can be a part of the process, opens the door to a level of group thinking that was never as possible or efficient before.
 

Mike809

Veteran
Supporter
Joined
Oct 15, 2015
Messages
17,171
Reputation
4,674
Daps
87,994
Reppin
Bronx
I believe in the singularity; I just hope humankind can benefit from it and not find its demise.
Talking about A.I., y'all should read "I Have No Mouth, and I Must Scream", a great short sci-fi story.
 

TEH

Veteran
Joined
Jul 24, 2015
Messages
51,234
Reputation
15,871
Daps
210,305
Reppin
....
I think your problem is you can't see how radical this technology is and how much it will change the world in the next 20-30 years. You are thinking of this issue in present terms and worrying about things like rations. The thing you need to realize is there won't be any rationing. The potential rewards of the singularity are so much greater than anything we can even imagine right now. 30 years from now it's likely that everyone on earth will be "richer" than Bill Gates. Not in the sense of actual money, but rather in the sense of what they have and the sort of life that technology will be able to afford them.

I want you to think of the poorest person you know in this world right now. Do they have a smartphone? The answer is most likely yes, given the mass proliferation of smartphones. Even kids in remote villages in Africa now have access to smartphones given how cheap they have become. The proliferation of smartphones has been called the information revolution. It's a revolution in the sense that everyone on earth now has access to more information than the richest and most powerful people in the world did just a few decades ago. A kid with a smartphone in Africa can look up anything and everything in human history using Google, and he will probably have better information than the leading professors at Harvard would have had 20 years ago.

Now move this technology forward so that it involves not only information but everything else. Imagine rather than a "smartphone", that everyone on earth 20-30 years from now has a "smart box". This "smart box" can manipulate matter at a molecular level. Thus it can make anything you want or can imagine for you, immediately, free of charge. You want a new house, it can build it. You want a cheeseburger, it can build that too. You want another "smart box" so you can give it to a friend, it can make that too. Now you might laugh at this, but there are scientists and engineers working on these types of machines right now. It's called nanotechnology, and we are right now in its infancy, sorta like computers and cell phones were 30 years ago. Most scientists and engineers expect magical machines like this to be possible in 30 years. If this is shyt we can imagine being possible in 30 years because we are working on it, imagine the shyt these super advanced computers will think up that we can't even fathom right now? This is why it's called a singularity. It goes beyond even our wildest dreams.

Stopped reading when you said the poorest person on earth most likely has a smartphone ... :mjlol: don't be an airhead ...
 