Today we discuss the current state of AI (Artificial Intelligence), the science, theories, and philosophies around its future, and the potential dangers/concerns with the impending next “phase” of AI. Is it possible we can create a “computer” that is as smart or smarter than us in every way (i.e. human-level intelligence)? Many scientists, technologists, and philosophers say “yes”, and “soon”.
How do you see it?
Show Notes
- waitbutwhy blog post on AI
- Elon Musk: artificial intelligence is our biggest existential threat
- Is artificial intelligence (AI) biblically possible?
- Fun with AI
- ChatGPT – your friendly neighborhood chatbot. It can generate some pretty hilarious text, and even provide some wisdom/advice 🤔
- DALL·E 2 – AI that can instantly create realistic images and art from a description in natural language.
Show Transcript
0:04
[Music] hello good morning good morning how are
0:11
you I'm good yeah Mark just said he's ready as he's ever going to be that's right that's right we started the
0:18
recording ASAP that's right cuz I don't want to be less ready than you are right
0:23
a couple minutes later that was the moment to start
0:30
did I did I hit play in the right time or recall oh yeah it was the right moment it was the right
0:35
[Music]
0:41
moment welcome to how I see it with me Mark Pratt and Justin Sternberg this is
0:48
a podcast that works to counter cultural polarization through
0:53
thoughtful [Music] conversations and this is the right
0:59
topic for today is it yes all right it is the topic for today it is the topic
1:05
for today so therefore the right one yes so the topic is we're going to talk
1:10
about artificial intelligence yes we are and Mark is excited I am is I am I think
1:17
I think it's good because uh in some ways you're going to recognize who who
1:22
really knows what they're talking about with the artificial intelligence of sorts mhm this guy who wrote this blog
1:28
Post Yeah well yeah and then uh the ability to recognize our uh our our
1:34
differences is a good thing I think it's another topic where we can express our differences in such a way that says hey
1:41
what about this and what about that and yeah I think so much of it can sound
1:46
like science fiction yeah you know yeah and it's like some people are going to
1:52
say oh wow artificial intelligence and other people are going to say oh this
1:57
might be something I can skip you know what I'm saying has that kind of feel to it just based on that Science Fiction
2:04
it's like that's not everybody's thing I think of um Marvel you know and you know
2:11
what was uh Iron Man Tony Stark you know and it's just it's just kind of a an
2:16
interesting topic to to think about and how uh what was the what was the guy that he created yeah well Vision Vision
2:24
yes but but Vision fought that other guy mhm uh to that both yeah both of those
2:30
characters are based on yeah literal versions of AI basically yes so that's
2:36
what kind of comes to mind for me you know yeah and Captain America wins out yeah that's right that's right
2:43
always but no seriously share your thoughts yeah so there's this um blog I
2:50
really like called wait but why MH by a man named Tim Urban and he just writes
2:56
long form posts about passion topics of his where he'll just deep dive sure and
3:02
he is not a scientist and he's not you know he's just curious and so he'll put together these posts about things
3:09
that write about them in a way that to me is very appealing like he's very interesting a little bit quirky a little
3:15
bit funny he he uses stick figure drawings to illustrate his point and stuff um and so he wrote about
3:22
artificial intelligence big long post he split into two um but I just found it so
3:29
fascinating and I I particularly just to spoil the lead like kind of want to talk
3:34
about it from the perspective of Believers right and and believe you know we know who our higher power is sure and
3:41
kind of how how will that how does that work that kind of idea like I think that would be the fun part
3:48
to talk about but in order to get there I want to do some setup so yeah
3:54
essentially we're on this progress uh bar chart right sure where
4:00
from you know the time we were created until now we've been progressing in terms of our human
4:06
intelligence right and do to clarify that no no no no okay to clarify that
4:13
and and this is what he describes and the thing too is over
4:19
each you know whatever grouping of time you want to call it right um we're
4:24
better able to leverage our previous knowledge sure and that's the thing
4:31
that's growing exponentially so he's using the term exponential sure so he's
4:37
essentially saying you know one of one of the illustrations he uses this it's kind of funny is if you went back to the
4:45
year 1700 I think that's the number he uses sure and you take a guy and you put him in a time machine bring him back
4:51
with you that now he would be so blown away with what what's happened in those
4:57
200 years well you know two however many years sure that he would die he would
5:02
literally die he would just be blown away and die right yep now this is all fictional the joke right but right now
5:09
if you went back to you know he after he died came back and said you know I want to do that same thing right so he goes
5:16
back to his time and then he piggy you know he Leap Frogs excuse me into the year 1500 and you know brings a guy up
5:24
to 1700 says look at all and you know the guy's like wow you yeah lots of things have changed or whatever but it's not a
5:30
die level of progress the way he describes it right it's like okay things have changed but it's not mindblowing to
5:38
the degree that he would die in order for the 1700 man to cause a die progress
5:44
level he'd have to go back let's say a thousand years right sure so in this guy's blog post he's talking about
5:50
hunter-gatherers that kind of thing right you know whether you ascribe to that
5:56
idea or not but the point is this guy you know he's beating his chest or whatever and he you bring him to 1700 he
6:02
might die at the level of progress right assuming he could understand it that idea the point is it's exponential
6:09
growth for each generation or you know each you know die progress level or
6:16
whatever and so the premise of this blog post is that we're at the edge of this
6:22
essentially Cliff straight up right because we have continually gotten
6:28
better at building on top of the foundations that we built uh with our technology with our quote unquote
6:35
intelligence to answer your you know previous question I definitely don't think we're more intelligent I don't know what this
6:43
the author would argue I think he might argue a similar thing but not quite that
6:48
we're basically the same intelligent it's just that we have the advantage of technology
6:54
and and keeping track of all the things we've done and then by keeping track we can now build on those things so nobody
7:01
alive today truly understands how every part of a computer works gotcha right we
7:08
basically go well this is a microchip I know you plug the microchip in here and I understand you know let's say the the
7:16
Wi-Fi part of a computer and I know how that works to to a large degree right and that's the part I work on but I
7:22
don't know how a microchip works or vice versa sure or but there are individuals that do that understand
7:28
parts of the laptop each each part as a human race there are individuals that understand
7:35
theoretically yeah yeah uh yes there yes yes because we're continually designers
7:41
if you would improving on it yeah yes but yes but the point is there's no one human that could you know build a
7:47
computer from complete scratch they would depend on the existing knowledge and you know does that make sense yeah
7:55
yeah um so anyway we're we're we're at this point we're now working on what's called artificial intelligence
8:02
and this is kind of where the cliff can come into play depending on whether you
8:08
believe it or not you know yeah um and so he describes you know a spectrum of
8:14
artificial intelligence which he calls calibers where caliber one is uh
8:19
artificial narrow intelligence and that's essentially where artificial intelligence kicks in
8:26
in One Singular sure vertical yep right does that make sense yeah his his
8:32
example was the aspect of creating a computer that can beat our best chess
8:37
player yeah but all that computer does all that intelligence is designed for is to play chess it can't necessarily do
8:45
your financial right report they can't tell if you look pretty or sleepy or
8:51
groggy exactly it's not going to necessarily read your emotions or you know scan your eye retina or all that
8:58
stuff and and from a security level it's just designed to play chess yeah and to
9:04
that degree we have and he calls it ANI artificial narrow intelligence sure and to that degree we have it's one specific
9:10
area yep we have an ANI all over the place in our world um he also goes on
9:16
later to describe like the algorithms that feed you your social feed and Facebook um as well as the ad system
9:24
where you look at something here and you go to another site and it serves you some of similar things right like like
9:30
there's all kinds of um artificial intelligence at play and um machine
9:36
learning is a new term okay that we're calling it where basically uh you can teach a computer to
9:43
learn about a specific thing and then make judgments that are you know theoretically better than a human
9:49
because they have all this input and they can you know you you teach it over time you just say it's basically like
9:55
the schedule in your phone in other words if you type a name in it will give
10:01
you the last time you saw that person or something along that line and it will and it will plug that in almost giving
10:08
you the exact time you need so you're not having to scroll through to find
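[Editor's note: the phone behavior described here — noticing who you usually meet and when, then suggesting it — can be sketched as a tiny frequency-count predictor. This is a toy illustration of the idea, not how any real phone assistant is implemented; all names and times below are made up.]

```python
from collections import Counter

def suggest_time(history, name):
    """Suggest the usual meeting time for a person, based on past entries."""
    times = [t for n, t in history if n == name]
    if not times:
        return None  # never met this person; nothing to suggest
    # The most frequent past time becomes the suggestion
    return Counter(times).most_common(1)[0][0]

# Hypothetical history of (person, time) meetings the "phone" has seen
history = [("Mark", "9:00"), ("Mark", "9:00"), ("Mark", "10:00"), ("Justin", "14:00")]
print(suggest_time(history, "Mark"))    # → 9:00
print(suggest_time(history, "Justin"))  # → 14:00
```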
10:14
yeah it's another great example and I um yeah and so essentially that we're there
10:19
already and that's you know nobody's wowed by that although really we're not wow because it doesn't seem that
10:26
impressive but if you kind of understood it truly you would go wow you know like this is
10:33
uh powerful but almost there should be a level like scary right and I exactly I
10:39
think that's yeah cuz I I'll admit I tend to I'm I'm it's not that I'm it's
10:45
not that I'm what was the first one kind of bored with it or not impressed unimpressed it's like oh yeah yeah I'm
10:53
not sure I always like this yeah I mean I understand how it works cuz I've done it over and over again and you know for
11:00
my schedule I'll meet a person typically at the same time but it's like each time it happens I still think about it it's
11:06
like I'm not sure I like this Mark has not welcomed our robot
11:13
overlords and and and in all honesty I'm not even sure I agree with the the human
11:20
progress as a line I would see it as cyclical excuse me or as cycles I
11:26
think yeah I'm not saying I think all intelligence is still the same based on
11:32
creation but I think there are times when we see people uniting and I would
11:37
say there's still times today where we can go back and say okay we don't
11:43
necessarily understand with intelligence or without intelligence how they did that I mean granted we can think of you
11:50
know the the the Egyptians and the pyramids and we recognize that slave labor type Dynamic and more people
11:57
together but yet I still think there's parts of History you know that we don't
12:03
we when we look back you know there there are those who would say oh alien life forces came to make this possible
12:10
yeah cuz we can't figure out how they did it and that kind of further enhances the point which is we've always again my
12:18
my how I see it we've always been the same level of intelligent that's how we were created the difference is they
12:24
didn't have a way to record how that was created and then the next Generation go oh we can if they could build a pyramid
12:30
this way we could build it this way and make it better where now nothing goes away essentially right like every not
12:38
you know information right information about how right right um and again
12:44
blueprints are created whether whether you call that you know software or sure what whatever that looks like blueprints
12:50
are created so that I don't have to truly understand how a pyramid is made I just plug in this blueprint and the
12:57
thing gets built does that make sense the various teams of the various manufacturing plants whatever they come
13:02
together and they produce this thing that I don't have to know how to make sure you know but they didn't have that
13:07
then right right sure one one could argue the Egyptians maybe didn't want
13:12
that to be recorded they wanted it to look you know I don't who knows right I'm not an Egyptian but no um but I
13:20
think that again that's the point of this post what agree or not it's I just think it's fascinating but just I agree
13:28
it's fascinating yes don't don't don't misunderstand all right yeah yeah so Ani
13:35
artificial Naro intelligence is you know it's kind of happening all over the place but all around us many people many
13:43
organizations are actively working on the next level which is artificial
13:49
general intelligence sure and this is basically human level artificial
13:55
intelligence essentially a computer that can think like us and what does that mean well Think Like Us in every way not
14:01
just can play a chess against us but can also figure out if you're sad or happy
14:07
or you know but also can determine you know look at a thing and say that's you
14:12
know that's a shadow or that's a you know an actual 3D picture or that's a 2d
14:17
picture like there's just so many things all that we just do intuitively and naturally you know uh he describes like
14:24
the things that are hard for us are easy for computers right like big calcul and stuff like that the things that are easy
14:31
for us are basically impossible for computers those natural intuitive systems where you like raise your arm in
14:36
front of you and you look at it and it's just like all of that was for free you just there's your arm you look at it you
14:42
know put it back down sure and it's like that was require no thought no effort and yet for a computer it's it's
14:48
impossibly difficult right although you know they're getting better at animatronics and computers and robots
14:53
and all that and they're getting like if you've ever seen the Boston Dynamics videos where they create robots
15:00
I'm familiar I'm familiar with it based on and that was where I was going you know it's like the robotics part of it
15:06
is the most difficult part you know to cuz our our body it's not though well
15:11
I'm saying from a muscular standpoint to be able how our bodies are designed as
15:17
one muscle tensions there's equal and opposite relaxation in the other and
15:23
it's very it's it's been very hard for robotics to duplicate that which always
15:29
makes you know robots look stiff or you know that and that's just something that I've been fascinated with over time is
15:36
that progression yeah to be as they look less and less stiff yes yes you know to
15:42
where they're they're they're becoming smoother and that kind of thing you know from a yeah from a human replica
15:50
standpoint yeah and they are basically leveraging AI to have the robot teach
15:56
itself sure so essentially when I jump and I fall figure out all the math
16:03
that was involved in that and mark that as a fail that that doesn't mean you don't but and then you change a variable
16:10
and you jump again sure still fail okay you know what needs to right and that's
16:15
just so much math and we you know our brains just like I give up right but for a computer it's like you just keep
16:21
trying keep trying do the math mark out what works what doesn't work right and you can I think visualize in your head
16:27
how that can learn right like it it can adapt to go okay this is how you jump
16:32
yep and then they put a box in front of it now start over right like exactly and
16:38
they've been doing that for decades now and so they got you know robots that will like again the Boston Dynamics
16:45
videos where they can do forward flips backward flips jump across you know holes jump up and down boxes do
16:52
synchronized dances yeah like it's it's crazy they got that little dog robot that can you know open doors and go in
16:59
and rescue something come back out know and those are real like those are things in use right now exactly yeah yeah um
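[Editor's note: the learn-by-failing loop described above — jump, mark the attempt as a fail, change a variable, jump again — is, in spirit, a random hill climb. Here is a minimal toy sketch; the "physics" is a made-up one-number stand-in, not anything Boston Dynamics actually does.]

```python
import random

IDEAL_POWER = 7.3  # unknown to the learner; only the jump's outcome is observed

def jump_error(power):
    # Toy stand-in for physics: how badly the jump missed (0.0 = perfect landing)
    return abs(power - IDEAL_POWER)

def learn_to_jump(max_tries=5000, tolerance=0.05):
    power = 0.0                       # initial guess at how hard to jump
    best_error = jump_error(power)
    for tries in range(1, max_tries + 1):
        if best_error <= tolerance:
            break                     # good enough: the "robot" learned the jump
        # Change a variable and jump again
        candidate = power + random.uniform(-1.0, 1.0)
        error = jump_error(candidate)
        if error < best_error:        # keep the tweak only if the jump improved
            power, best_error = candidate, error
    return power, best_error, tries

power, err, tries = learn_to_jump()
print(f"learned power {power:.2f} after {tries} tries (error {err:.3f})")
```

The machine never gets bored or needs motivation: it just keeps trying and keeps whatever tweak reduces the error, which is exactly the advantage the hosts point out.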
17:07
but that's definitely not artificial general intelligence which is intelligence at the level of a human you
17:13
know you might describe that as intelligence of an ant maybe you know it's still fairly narrow yeah in the
17:21
concept true of the human body yeah in other words certainly broadening yes no
17:26
doubt yes and I and I think part of that two is the the advantage over you know
17:32
computers or robots or that kind of thing is humans would give up yeah on many
17:39
tasks before you know they'd learn something as as human I'm myself
17:44
included you know if I if I'm going to I'm not going to do something that requires me to fall 752 times you need
17:51
motivation yes right yes there's the key difference you could spend s you know
17:57
Edison right in the famous stories how many he threw in the trash I don't know whatever light bulbs right
18:04
basically that 700 whatever you know we can but we need motivation precisely I
18:10
would admit that so I think I mean you just touched on one of the biggest concerns and/or fears of artificial
18:15
intelligence is that they don't need motivation sure and basically you can
18:21
program them to do a thing and they'll do it and they don't care they're not thinking they're not having feelings about it yet
18:27
right according to this Theory um and the scary thing there right and
18:32
you know I think this is a level of artificial intelligence that most people are afraid of right now is not an
18:38
intelligence that's smarter than us in every way like artificial general
18:44
intelligence where they're essentially as smart as a human because we that would be scary but that would be a
18:49
different kind of scary what we're scared of is a dumb AI doing smart things sure right so they launch all the
18:57
nuclear codes because they you know whatever there's a perceived threat yes exactly that is the thing
19:04
that freaks us out about Ai and where we you know we have it continually train itself and it trains itself into a
19:10
position where we have to get rid of this human thing cuz they're you know basically cockroaches on this Earth You
19:16
know ruining you know destroying you want to play a game what's that it's a it's a movie which one uh
19:23
it's the one in the where the the computer takes over the intelligence and
19:29
wants to send a and the kid gets on oh I forget the name of it but yeah it's uh
19:35
never mind yeah it's going to bug me it's okay yeah yeah yeah you'd have to be my age to get that like
19:42
Tron no it's not but yeah be 200 go back but moving along I apologize
19:50
I apologize that was what came to mind that's fair no now I really want to know but yes essentially right and there's
19:57
actually a lot of new movies there's one called Ex Machina and it's essentially an
20:03
AI robot that you know basically learns and learns and learns and then said basically humans are the problem and
20:10
it's in a uh environment where there's just two people it's a robot and a human
20:15
sure and at first she's very affectionate and eventually she's like fake affectionate sure and she's like
20:22
trying to figure out how to get rid of this human that's a problem right yeah no that's typically where those kind of
20:29
go is it comes full circle and it's like humans are the problem again because we're the aggressors and we're trying to
20:36
you know take over and it's like this other thing we see it as an enemy but
20:42
yet it's really just a kind loving thing that just looking to promote its own
20:47
behalf as well right so yeah right and artificial intelligence gets things wrong all the time obviously already and
20:56
a class not classic an example I've heard uh in the past is essentially uh
21:02
artificial intelligence that can read facial expressions or whatever like even your phone scan thing doesn't do as well
21:08
with black skin it's like epic fail you only had white people in the room designing that and it works great for
21:14
them sure right and so then it basically marks that person as no you're not allowed in your phone you're basically
21:21
you're a bad person right if it you know kind of put it essentially zeros and ones black and white you know what I
21:26
mean and so you take that to a scale that's scary and it's problematic right like oops we didn't include that
21:33
scenario um so that's kind of the general I think fear right is uh dumb AI
21:40
doing smart things sure um you know or vice versa smart AI doing dumb things
21:46
right yeah that that creates the fear yeah and what he describes in this post he's saying we're kind of completely
21:52
missing the point because um those are scary things but what's more scary is in
21:58
AI that more intelligent than us mhm and then we now have you know essentially
22:04
robot Overlord right like we we have to abide by their rules MH because they
22:09
make the rules they can turn the water off they can turn the water on they have they have access to everything right
22:15
sure theoretically that's the idea that's the idea yeah exactly and so
22:20
essentially the theory is once they reach artificial general intelligence because of the advantages they are
22:28
already have over us like not needing to be motivated to continue on a task or um
22:34
also he talks about the power it requires to um the the power our brain
22:40
uses is essentially pretty minimal but we have access to power you know much greater than that to feed to them sure
22:47
right so they can you know there's just a lot of advantages there's also disadvantages but no doubt you know but
22:53
with those advantages they're not going to stay at artificial general intelligence at AGI level for very long
22:58
right cuz we're now in exponential State and they're teaching and learning themselves so you know they have the
23:04
advantage you know again this is the theory this is the you know where they can continually improve where we can't
23:11
we're kind of bound to our quote unquote Hardware software and we're kind of limited although we can build on what
23:17
the last guy built which is what I was talking about we're still limited to
23:22
that sure where an AI essentially is not limited is the idea is the idea right so
23:28
then you get into what they call artificial super intelligence and that's basically the one level above you know
23:34
human general intelligence to Infinity right right and and it's interesting at
23:40
that level I like the it it's the the the line that leads in of course Oxford
23:47
philosopher so you know in my perspective that's that this is philosophy that's right you know yeah
23:54
yeah to a certain degree there is right there's so much philosophy exactly right
24:00
yeah and I and I think that's the part that we have to wrestle with yes because I think we can because and and I guess
24:08
that's where it comes back to for me it can almost come back to Marvel because
24:14
it's really uh it it's philosophy it can even be entertainment you know to think about
24:21
and and it's and let's be perfectly honest there are people out there not necessarily us but I'd say smarter than
24:27
us even you uh but to the point that that they get paid to think about what could be next
24:36
yeah yep and from a from a philosophy standpoint and I and I think that's a
24:41
that's a great thing don't get me wrong but like you say that's not me yeah and so yeah but I think that's a I think I
24:49
enjoy when I read something along that line that says okay yes this is what we're talking about yeah philosophy
24:56
philosophy yeah and you know because AI the people working on that next those
25:03
next levels there are so many unknowns sure right so that so they there's
25:08
essentially a very large broad spectrum of people working in this
25:14
field philosophers and scientists and who knows what else right but right the idea is uh yeah you got to get
25:22
philosophers in there so you know well and and in some ways I think it it lends
25:29
to your point I think philosophers create that
25:34
motivation to a certain degree yeah this could be and that excites people to say
25:41
wow and then your scientists your designers it's like okay how can we do this or is this possible and other
25:48
people are automatically going to hear that and they're going to say this is scary I don't like this well to be fair
25:54
there's a good contingent of philosophers and/or scientists that are saying
26:00
this is scary this is a big deal this we need to be very careful and so it's interesting because there are
26:07
contingents of AI developers and and and you know innovators that are the
26:13
first category of this is exciting this we got to get this done this is going to be amazing this is going to solve cancer
26:19
this is going to solve death this is going to you know and then there's a camp of War Games was the name of that
26:25
movie sorry WarGames right yeah I that sounds familiar I don't think I've seen it but thank you no problem I didn't
26:32
want you so the first Camp is kind of the the sky is rosy or whatever you know sky is blue you know vision is rosy
26:40
whatever the other Camp is this is very dangerous so let's do it let's do it but
26:45
in a way that's as careful as possible because they're still doing it right this other Camp there's other camps that
26:51
are going to do it so we have to try and get there as quick as possible but in a
26:57
way that is cognizant of the risks and cares about the risks sure uh so it's
27:03
not like a hands off type of thing because they know someone's going to put their hands on it so let's do it in a
27:10
way that's responsible does that make sense yeah um and and even Tony Stark
27:15
failed at that yeah yeah yeah I mean those yeah those movies are great in terms of kind of setting up the the
27:22
issue yeah um but Elon Elon Musk has his hands in some different companies
27:28
no doubt yeah where he's basically one of those people that would say this is very concerning we need to do it the
27:35
right way responsibly whether he means that or not you know that's besides the point but that's kind of that camp um
27:42
and then even that it's like what is a person's motivation are we always as
27:48
altruistic right as we would like to think we are
27:54
so to get through all of that we're we're to a point where there's this artificial super intelligence that's
28:00
smarter than us and we know you know are not the top MH Top Dog right and uh he
28:07
said ASI artificial super intelligence is the reason the topic of AI is such a
28:13
spicy meatball and why the words immortality and Extinction will appear in these posts multiple times right
28:19
because you know once we're not the top it's they can either enable some unlock
28:25
some things that we never could and they basically said you can be our Immortal
28:30
pets right sure or they can say no we're knocking you off the totem pole you know off the balance beam of existence and you're
28:38
not worth keeping you know kind of thing we don't know right so all that to say
28:44
the real question the the polarizing one is do you think it's possible
28:50
do you think that cuz in my let me just share a little bit of my
28:57
perspective I think there's a Tower of Babel in here somewhere huh right I hear
29:03
you and to explain that reference the Bible you know there's a Bible story where you know
29:09
before uh you know the world was split up into different languages and countries or whatever we kind of all
29:15
left the boat right Noah's boat I think it was after right and we all just stayed in the same basically but one big
29:21
happy family and as a result we started to build this Tower in a town called or a city called Babel and were very proud
29:28
of it and we said this is going to be amazing this is going to reach the heavens this is going to rival anything God's ever done like and we can do this
29:35
because we're so smart and we're you know doing this together etc etc and we don't have the the limits of we don't
29:44
have limits essentially sure and they built built built and God basically said snap and then you
29:51
know broke basically broke everyone up into different languages so you were working side by side with someone laying
29:57
a brick and all of a sudden he's talking a different language well that puts a damper right on any kind of development process because you can't
30:03
communicate and that basically destroyed that right that that was the end of that and as a result that's how you know
30:10
countries and nationalities formed because the ones that could understand each other kind of separated out and say I
30:16
sure the rest I'm not sure about but let's let's go do our own thing right mhm so I'm saying I think there's a
30:25
there's a piece of this right where well I do I don't know I don't know that's no
30:31
that's why I'm asking let me have your thoughts oh no I'm uh it's a I guess I'm pragmatic to a certain
30:40
degree because I realize you know for every science fiction movie there's another there's another
30:47
um apocalypse type movie mhm you know that shows um you know cities being overgrown
30:55
by Vines and everybody's back out in the country Countryside and you know there's no electricity the grid is just so small
31:03
you know C something happened you know and I'm recognizing a lot of this AI is
31:09
dependent upon electricity grid you know circumstances like that and um and even
31:16
today you know I think about you know third world countries at times you know
31:22
and that you know people I'll call it if I may you know like an indigenous
31:27
people who who are you know just out completely
31:33
disconnected and those people from my in my opinion will continue to exist the way they've
31:40
always existed outside of artificial intelligence from my perspective you
31:46
know because I think you know AI is definitely a a first world thought yes
31:54
you know to where we yeah but first world thoughts have impacted the whole world many times throughout history
32:01
right so whether it's colonization sure but I I also think if I may on the other
32:06
end of the Bible you know when we talk about Revelations now we can talk about what John sees and he's still talking
32:14
about horses mhm now is that something that looks like a horse that's mechanized you know or was he describing
32:21
that H from his his era as something that looked like a horse mhm you follow
32:28
me that kind of thing and that's and that's kind of where I he also describes a lot of things that are nonsensical
32:35
right like scorpions with heads like a lion like bird like yeah like the size
32:40
of a horse but yeah heads of a lion yeah like yeah and I'm open to that you know I mean it could be a helicopter you know
32:47
look like a scorpion you know that kind of thing you know I'm yeah or to stay in this AI vein right like they create some
32:55
sort of helicopter like thing that look for all intents and purposes is just a big insect us it's mechanized it's
33:02
whatever maybe even has organic parts right because again once you're smarter than us now what's the limit you know I
33:09
think that's his big concern his big you know thing is just like you know
33:14
we're real close to stuff you can't even imagine right right and I
33:20
just I I just think it's so fascinating I don't necessarily agree though because again you know his point with the
33:27
exponential growth is that we're about to hit a tipping point right correct where bam it's exceeded us and now
33:34
artificial intelligence has taken over and it continues to grow
33:39
exponentially which is the the crazy thing yeah yeah um we saw this
33:45
illustration in this post where it's an animated GIF right where it's water
33:50
it's Lake Michigan and all the drops of water in Lake Michigan and it showed uh basically an animation of the
33:58
time span so like 1700 and you know Lake Michigan is empty sure and
34:04
what were the numbers uh hang on I got it right there here it is yeah it's
34:10
basically calculations a second so it's going through 1994 2000 2006 and
34:18
we're barely filling it and then you hit uh 2018 and up to 2025 and the whole
34:25
basically the whole lake just instantly fills like that's the power of exponential growth which people truly
34:32
can't understand unless they've experienced it in one way or another um and so that's why um you know
34:39
investments are so important right oh sure very much growth yeah from where it was 2021 to 2025 you are correct
34:45
and then it just fills the whole lake all of a sudden yeah yeah yeah so that's uh yeah so in terms of once but and what
34:54
what what this illustration fails to demonstrate is it doesn't just stop once the lake is filled mhm right right to
35:00
your point the exponential growth continues to be exponential and we have no idea what that looks like after that
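The doubling dynamic behind that Lake Michigan illustration can be sketched in a few lines of Python. This is a toy model, not the exact figures from the GIF: we're assuming the lake holds roughly 10^20 drops and that the drop count doubles once per step.

```python
# Toy model of the Lake Michigan doubling illustration discussed above.
# Assumed figures (for illustration only): the lake holds ~1e20 drops,
# and the number of drops doubles once per step.
LAKE_DROPS = 10**20

drops, steps = 1, 0
while drops < LAKE_DROPS:
    drops *= 2
    steps += 1

print(steps)                        # 67 doublings fill the lake
print(2**(steps - 7) / LAKE_DROPS)  # ~0.01 -- only about 1% full 7 steps earlier
```

The point the hosts make survives the toy numbers: one step before the end the lake is only half full, and seven steps before the end it is barely 1% full, which is why exponential curves look flat for ages and then seem to fill the lake instantly.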
35:06
you know yeah the cat's out of the bag you can't put the toothpaste back in the tube you know it's like this thing's
35:12
going and you're not stopping it you sure um so if you're not stopping it of
35:19
course at that point you know I think there again from my perspective the Sci-Fi part comes into play because now
35:26
we need to live on other planets and we need to you know develop other areas to live and I think
35:33
sometimes and some of the AI for me granted it's with us in a narrow form
35:39
that kind of thing currently and I really you know don't necessarily have a strong issue with that but there is a
35:46
part of me that can even kind of tie some of this to like
35:52
a well overpopulation you know that there is a
35:58
movement that says yes you know and and granted different people are going to have different it can be a polarizing
36:04
topic you know that yes we don't have enough there's too many people you know
36:09
and I think sometimes for me AI can be tied to that because it's about being
36:16
able to end world overpopulation and you
36:22
know because something else is going to kick in and provide that balance for us
36:27
that we need and and I guess that's how I see it
36:33
sometimes it gets connected and I'm not saying yeah I get that's that's that's how I see it yeah you see that as how
36:40
you think it will go you mean or no I see that as part of this um model that we
36:47
currently have you know in government in certain movements that
36:54
basically are concerned about that you know and this a I think that's a part of
37:00
the development or a purpose behind it is that we can end all of these human
37:05
type problems because a computer will fix it you follow me and we can end
37:11
world hunger we can end poverty we can end and I'm not sure that's part
37:18
of the design yeah for our Earth you know for that that's just my personal
37:24
perspective on it I I think and here again I
37:29
I may sound you know resistant or anti
37:35
and I'm really not but probably my questions or my thoughts as I've
37:42
thought about it you know probably my questions may come across that way MH
37:47
and it and it's like that's you know yeah that's that's kind of I I I think
37:53
it's fascinating to talk about M But there again I I'm not I'm not a
37:58
philosopher MH so I'm okay with being able to say I'm not and so
38:05
therefore I'm going to probably move on with some of the things that I do have
38:11
the ability to recognize and it's in in some ways it can be kind of like we we
38:16
spoke about government you know yeah I see it as bigger than I am so I'll go on
38:23
and I'll be responsible for my circle of influence yeah but once it gets to that level it's like
38:31
yeah what's Mark gonna do anyway I'm not even updating my phone right now you know don't worry AI will do it exactly
38:38
exactly and I hate it when they give me that new update and then it changes the other stuff that I got used to but
38:43
that's just my yeah but bringing it back yeah and then so then is that is that
38:50
your full-on perspective on the uh if you will the uh the super intelligent
38:55
artificial superintelligence well he kind of you said Babel I wanted to bring it back for that okay
39:02
gotcha he kind of tips his hand a little bit in the article when he says a large
39:08
contingent of you know people who are in this field believe this is close sure
39:13
between you know artificial general intelligence where a computer is about as intelligent as a human the kind of
39:20
the furthest projections in the mean is like 2060 okay they're saying at the
39:26
latest okay but more like 2040 okay maybe even you know sooner right because of
39:33
exponential growth um but the big sticking point is we
39:38
don't know we really don't know how far we are from it whether it's possible we
39:43
don't know what we don't know sure and so that's the part to me that um I think
39:49
I think there's more to that than this particular article is letting on
39:55
cuz this article is basically saying this is a big deal everyone should be thinking about this like it's so close
40:01
maybe sure and you know all the leading experts agree this is really close and you don't even know about it right you
40:07
know and so that's kind of what this post is about the way I see it is again I think we don't know what we don't know
40:13
and I think there's some insanely difficult things that they don't know about yet it's going to stall progress
40:20
is the way I see it kind of that Tower of Babel the languages it's like well now we got to relearn how to
40:28
communicate it took us from Babel until the internet right to be able to collaborate effectively maybe not the
40:35
maybe not the internet but because I think we've been collaborating effectively with other language speakers for a long time but sure I think the
40:41
internet really kind of has boosted the ability to collaborate globally sure
40:47
right to maybe a similar degree as Babel I don't know no and I and I think you're exactly right in that progression and I
40:55
think and I'm not I'm not equating the two don't get me wrong but when I think
41:01
of AI to a certain degree I do think of war on terror you know because there has
41:08
been this Global Connection to fight this war on terror so therefore
41:16
information is being filtered in such a way and you know these algorithms have
41:21
been created in such a way to be able to filter out all of this information
41:28
in such a way that says okay is this threat credible you follow me and I think
41:34
that's a part of our connected intelligence if you will and I think it
41:39
also the the point you alluded to earlier I think it's also can be dangerous because a person like you said
41:48
dark skin versus light-skinned you know a person who might be
41:53
expressing a certain anti-government opinion could also be identified you
42:00
know in this and and I think when sometimes you know I'm I'm equating it to what Facebook and you know and these
42:06
other entities can do that basically filter all this information to feed you exactly what you want to hear mhm and I
42:14
for me that's that's a part of this intelligence the artificial intelligence
42:20
Dynamic mhm that's still very much in the AI department I would agree in the
42:26
sense uh dumb computers doing smart things or vice versa sure but yeah I
42:31
mean I agree with you I think you know uh probably those same algorithms that
42:37
would flag a terrorist or whatever probably would have caught you know Benjamin Franklin George Washington
42:43
exactly that's my point yeah or you know I think that's fascinating and someone
42:49
who is basically just recognizing um independence yeah I was just what
42:55
would the Federalists do if you were you know to be able to say yeah
43:00
this this bigger government design isn't my thing MH and so you know that that
43:07
that's kind of how I connect it and so back to your point though
43:14
um you were talking about something that could stall this progression now I need
43:20
to know is there any one thing that you are specifically aware of as you think
43:26
about it what might be one of those things fascinating I mean essentially
43:32
they describe the difficulty of I mean we're talking about robotics right how long
43:37
it's kind of taken us to figure out how to make a robot kind of move like a human MH and movement is physics and so
43:45
you know in calculations and math and so it's hard but we've been doing it right sure um we don't even know where to
43:51
begin with emotion you get what I'm saying oh we don't even know where are you talking about people in
43:58
general yeah for instance people right uh but yeah just
44:04
like they need the computer to figure out how to do that cuz we don't even
44:10
know where to start so essentially there's already a non-starter in a sense
44:15
I hear you right where but I also don't necessarily think that's the thing right that may not be the thing the computer
44:21
might just be able to read faces and you plug a million movies into this thing and a million pictures and you know
44:28
whatever million is a small number just the whole planet's worth of information it eventually goes okay this is sad
44:35
right right right and it can figure it out you know and it can replicate it yeah and it might even program itself to
44:42
be able to be sad right sure again these are just things we don't know you
44:47
know because it's requiring us to depend on something else doing it that we said
44:53
Hey Now do this exactly you know so to me that's kind of the big when you say is there a thing the thing
44:59
is where it's we can't know yeah you know cuz and
45:05
I apologize because I keep coming back to movies I don't know I haven't even seen that many but I think of it in the
45:11
terms of the old Terminator MH with Arnold Schwarzenegger you know to where
45:17
there was this glitch well no there was this glitch that made him the
45:25
Terminator and other people could notice if you looked closely that he wasn't
45:30
able to assimilate empathy and emotion to where those emotional
45:36
Dynamics tended to give him away mhm so if someone was observant enough they
45:44
could tell the Terminators from actual people because they were just unable to
45:51
connect with certain emotional cues yeah and I and I
45:56
I I think think that's the point you were bringing up to a certain degree it's like that becomes an obstacle mhm
46:03
But there again that's on my current level of thinking but that's yeah I I'm in agreement with you and uh I guess in
46:10
some ways I'm honestly uh surprised that we are in as much agreement as we uh as
46:16
we are from a you know IT person to a non-technical person I
46:23
was yeah that's true I mean I think it's just so theoretical that it's hard it's not about IT versus no I hear you
46:30
it's just it's like you said it's philosophy right so that yeah it you know it's interesting you know talking
46:37
about the Terminator mhm that that's probably another challenge which is kind of human ambition to leverage what we've
46:44
gotten so far right like Oh you mean you can create a machine that will do that
46:50
terminate well that's good enough you know and and kind of take that off and kind of wreck things right
46:57
that kind of sets us back yeah again you know seems biblical to me in the sense
47:03
of like we can't control ourselves and we wreck our own progress as a result of that we set ourselves back a hundred
47:09
years you know doesn't surprise me at all if that were to happen you know what I mean no I do I hear where you're
47:15
coming from yeah because because our our Ambitions are to be like God yeah and
47:22
yeah we can see how he frustrated that in that example yeah and who's to say that
47:29
he can't and I and I would say there might be a uh I'm not saying that
47:34
individuals who tend to you know promote or philosophize
47:40
about artificial intelligence might be anti-God but I think we factor that in
47:47
that God is we see God as our higher power you know Jesus Christ is our higher power of course but you know
47:53
being able to recognize anything that would be lifted up to be higher than
48:00
that will tend to experience frustration yeah yep from my perspective and right I
48:08
agree completely um yeah it's you know back to the scenario about us kind of
48:15
ambitiously ruining our own progress speaking of movies that made me think a Jurassic part right okay sure
48:21
where they create this amazing Park from science and they have these dinosaurs but you know that one guy yeah you know
48:27
that's one company yeah and then the storm happens and he drops the thing in the mud or whatever and then you know sets
48:33
up the next 22 movies in the franchise right and basically frustrated the
48:39
progress of what was going to be fascinating amazing progress for us and
48:44
we went backwards because of human ambition I think movies are a great you
48:50
know philosophical tool exactly yes cuz they create this picture of what it
48:56
could be yeah yeah and it's interesting as we're kind of processing this I'm
49:03
processing it a little more with you it's like it's funny because how how movies themselves create this balance if
49:10
you will yeah you know between yeah we we go to other planets
49:16
we conquer those you know and you know I'm just thinking a different clips that
49:21
I've seen and then you know on the other end is this process to where yeah all technology
49:27
doesn't exist and we're back to bartering you know and that kind of thing and then you can put zombies in
49:33
there too you oh yeah of course AI zombies
49:38
exactly but yeah so that being said um I appreciate the discussion it's it's it's
49:45
interesting to think about and I think there's a part of our world that you
49:51
know has artificial intelligence and that and I you know unless something
49:56
major happens you know from an electrical standpoint you know I still
50:02
think uh power would be an interesting uh dynamic to talk about but if you put
50:07
something on that problem smarter than you without the tiring aspect I
50:15
mean how big of a problem is it I'm not saying we don't know we don't know how big of a problem it is yeah it might be
50:21
huge still and your point is 100% valid but you give one step of quote
50:27
unquote our intelligence maybe even in the AI sense right in this one way
50:33
they're just vastly Superior and they solve the power problem so fast right right well and I and I think that is a
50:40
that is a real human Dynamic is that we resist change mhm you know to where even
50:48
the power itself if something could be generated I was initially thinking of
50:53
solar yes that can be a portable source of power and yet I'm also thinking you
50:59
know okay we use fossil fuels right now but if something could be designed that
51:04
actually ran off oxygen like a human you know or you know hydrogen oxygen you
51:11
know something along that line as a source that would be that would that would fix that problem you know to some
51:19
degree and I think you know some of this information is out there you know of I
51:25
think the technology and some of these things is out there but you know we would have to re-mechanize so many
51:32
things to shift our cars and you know it's just some of I I'm I'm not a
51:38
conspiracy theorist but you know I do believe that some of that technology does get squashed to a degree you know
51:46
just because yeah it's not efficient well it's more efficient but you know it
51:53
it does away with the status quo and I think yeah that's another topic yeah you think
52:01
again tying it back in though with AI again quote unquote smarter than us
52:07
would go you know the emotional aspect or the political aspect of well we have
52:12
to go green we have to be electric now it's kind of like uh well we put all
52:18
our eggs in this basket so you know we have to the AI would go that's silly yes let's do it this way
52:26
yes I don't know what this way is which might have certain aspects of this but might have certain aspects of the old
52:31
school the combustion engines around for a while till we made this so good that
52:37
it then becomes an obvious replacement or a hybrid if you if necessary whatever
52:43
right that's I mean we don't just don't know and and you take away the politics
52:48
because it's quote unquote smarter than us and doesn't need politics well then it can choose the best thing based on
52:54
the best thing not based on politics yeah you know what I mean yeah and I I would dare say part of AI in some ways
53:02
would do away with Nostalgia because I think you know yes
53:07
it's a question of if the AI cares about us or not and that's what his
53:12
whole second post is about is essentially what happens if it's a good guy and what happens if it's a bad
53:18
guy right yeah and uh again just philosophy right but if it's a if it's a
53:24
benevolent robotic dictator and might go you know Nostalgia is good for these guys they need a little bit of that to
53:30
be the most productive they can be yeah you know mhm interesting philosophy as you say it
53:37
is yeah so and and here again it would come back to from my perspective can
53:43
human design do that yeah yeah or or is human design to the degree that it's
53:50
human flawed in some way right that's kind of based on sin nature if I may that's kind of where you know you ask me what
53:57
the thing is I don't know I don't know of a thing except that right like to me that is that is a thing that is a thing
54:04
yeah and we're talking about you know basically leapfrogging human intelligence to where that sin nature
54:10
isn't the blocker and I'm saying yeah but we got to get through that one you're saying right like that's not a
54:16
non-issue right yeah um yeah and that's where I would come back to that
54:23
altruistic Dynamic and yeah it's you know altruism you
54:32
know every all true right like there's no such thing in the human
54:38
perspective because even when you think you're thinking altruistically you have a bent and why
54:44
is that bent important you know why is saving old ladies from getting hit when they cross the street important mhm why
54:52
well that just because mhm why mhm well I know why it's because God created us
54:59
to honor life and his creation and but if you take away God there's no it's
55:04
like well that person is close to death she you know probably will die in the
55:10
next decade mhm and she's consuming resources that could be leveraged in a
55:15
better way you know and she's blocking the cars that are that becomes
55:21
inefficient right sure and so the system that's supposed to work perfectly is now you know blocked by this imperfect old
55:29
thing that should go away you know right yeah no it's creating a parameter and therefore she's outside this parameter
55:36
and therefore her value is diminished MH and I would argue that you
55:42
know is yeah and so altruism it's like what do you base that on if you don't
55:48
have a moral center based on a god-given moral center right his values the reason I
55:54
think that's valuable is because he said it's valuable and he knows better than I do right uh that's how we operate right
56:01
sure but if you don't have that then it's like what how do you determine highest values MH you know it's going to
56:07
to me it's going to look a lot different than what we currently universally think
56:12
is a good value don't murder people don't murder old people don't murder well young people I guess we kind of any
56:19
people right yeah I mean apparently we're slipping up on that one right so we're already seeing some of that
56:24
progression of when we lose what you know when our values don't align with what God's values are we you know you
56:32
know sometimes we think well that's not real life right so you go into a fetal
56:37
stage and I guess I'm arguing that like the old lady crossing the street we're
56:43
not that far from that saying that's not valuable either you know I hear you but
56:48
yeah I don't want to get into that and there again I think some of that comes back to you know what is the Earth and
56:56
what are people and if people are just biological beings right you know
57:04
that have no actual purpose right then yes there's there's a there's a formula
57:11
for that yeah and there's so much philosophy that's left out of this post in terms of why are we who we are mhm
57:19
sure you know yeah because that should tip tip you know tip tip some
57:26
information in our way about we're unique for a reason and there's something going on here that doesn't you know but it's Evolution well I guess
57:33
that's you know yeah well thank you for the uh the topic Justin you're welcome
57:39
it's uh yes it's been fun share with you my artificial intelligence I appreciate that I appreciate that yes indeed and uh
57:48
anything else you would like to add for a a wrap up it will be interesting to
57:55
see how the AI sees it fair we'll have to interview it how
58:02
does and that's how today that's how we see
58:08
[Music] it hey thank you for listening to our
58:15
podcast if you like how I see it please do all the things that podcasts tell you
58:21
to do subscribe rate review follow us and and or talk nicely about us on
58:28
social media if you want to reach out the email is us@howiseeit.click yep
58:37
I said dot click as in c-l-i-c-k please tell
58:43
your friends about this show and we'll see you on the next one
58:49
[Music]