
00:00
thanks for attending the disruption now
00:02
virtual summit
00:03
2020
00:06
this session is brought to you by fuse
00:09
by cardinal health
00:11
your panelists are will hayes
00:14
and doug mccullough this panel will be
00:17
moderated by lauren burke
00:21
and now the host of the disruption now
00:24
virtual summit rob richardson
00:27
welcome to the disruption now virtual
00:29
summit we are glad to have you here as
00:31
we talk about the human connection to
00:32
technology
00:33
hope everybody's doing well we have
00:35
people from all across the country so we
00:36
got my man
00:38
will hayes who's uh joining us from the
00:40
bay from
00:41
lucidworks how you doing man doing
00:42
really good really happy to be here
00:44
thanks for having me
00:45
uh thanks for being on doug mccullough
00:47
our man who's we've done a lot of events
00:48
together now
00:49
i think we're kind of becoming uh we're
00:51
kind of becoming pros at this thing how
00:52
you doing doug
00:55
glad to be here well there we go
00:57
good to see everybody
00:59
yeah doug is the cio of the city of dublin
01:03
uh one of the leaders of black tech 614
01:06
we appreciate all his partnership and
01:08
everything he does and last but
01:10
certainly not least
01:11
and the moderator for this event lauren
01:13
burke with
01:14
women in analytics how you doing good
01:16
how are you
01:18
hey living the dream living the i guess
01:21
the virtual dream because we can't do
01:22
this in person
01:23
uh but we're gonna make the most out of
01:25
this moment and uh
01:26
you know this time has allowed us to
01:28
connect with a lot with a lot more
01:29
people sometimes so
01:31
we're making the most of it and we're
01:33
learning as much as we can so
01:34
lauren i'm going to turn it over to you
01:36
and you get to take us and emcee the event
01:39
all right thank you rob all right so
01:42
since this is the human connection to
01:44
technology
01:45
will this is something you're very close
01:46
to and work closely with can you
01:48
you start off by telling us the
01:49
difference between artificial
01:51
intelligence
01:52
and augmented intelligence yeah
01:54
absolutely i think it's a really good
01:55
concept for for people to understand
01:57
as we think about where we have you know
01:59
sort of good use of technology of ai of
02:03
machine learning to either enrich our
02:04
lives enrich our experiences or
02:06
where things can go a little sideways
02:08
and i think a perfect example is if you
02:10
think about what machine learning
02:11
actually is is it's a machine and one of
02:13
my favorite examples was a tweet that a
02:15
woman put out one time to amazon saying
02:17
hey amazon
02:18
um you know i i bought a toilet seat
02:20
because i needed to replace a
02:21
toilet seat
02:22
um i'm not like an addict or anything
02:23
why do you keep advertising and pushing
02:25
me more promotions and telling me when
02:27
there's new toilet seats and
02:28
that's an example of where just
02:29
artificial intelligence just looks at
02:31
numbers and makes decisions so
02:32
our belief and with my company lucid
02:34
works what we try to encourage is how do
02:35
you intersect
02:36
humans in the middle to make those final
02:39
decisions to use that human intellect
02:40
that human instinct that intuition
02:42
that a machine's just never going to
02:44
have think about artificial intelligence
02:46
as ways to provide
02:47
more efficient choices to humans but i
02:50
do not think that artificial
02:51
intelligence should be making choices in
02:52
our daily lives that's where this idea
02:54
of augmented intelligence comes in where
02:55
we become much more efficient
02:57
but a human is always in the loop to
02:58
ensure that we're providing that
03:00
human experience on top of things first
03:03
and foremost
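To make that distinction concrete, here is a minimal, hypothetical Python sketch of the human-in-the-loop pattern being described; the function names and the toy scoring model are illustrative assumptions, not Lucidworks code. The machine only narrows and ranks the options, and a person makes the final call.

```python
# Minimal human-in-the-loop sketch: the model only *proposes* ranked choices;
# a person makes (or rejects) the final decision. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Choice:
    label: str
    score: float  # model confidence, not a decision

def propose_choices(candidates, score_fn, top_k=3):
    """Let the machine narrow a large option space down to a short list."""
    ranked = sorted((Choice(c, score_fn(c)) for c in candidates),
                    key=lambda ch: ch.score, reverse=True)
    return ranked[:top_k]

def human_decides(choices):
    """The human stays in the loop: review the shortlist and pick, or reject all."""
    for i, ch in enumerate(choices):
        print(f"[{i}] {ch.label} (model score {ch.score:.2f})")
    picked = input("Pick an index, or 'none' to reject: ").strip().lower()
    return None if picked == "none" else choices[int(picked)]

if __name__ == "__main__":
    items = ["toilet seat", "bath mat", "shower curtain", "towel set"]
    # Toy scoring function standing in for a real relevance model.
    shortlist = propose_choices(items, score_fn=lambda c: len(c) / 20.0)
    print("Final decision:", human_decides(shortlist))
```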
03:04
yeah absolutely that's incredibly
03:06
important especially when you're working
03:08
with technologies that touch large
03:10
amounts of people
03:11
doug so as the cio of the city of dublin
03:14
how are you
03:15
integrating humans into the process of
03:17
developing your technologies that affect
03:19
the citizens
03:22
well this is interesting you know
03:26
small communities don't develop ground-up
03:29
technologies you know we tend to specify
03:32
them we purchase them from others and we
03:34
configure them
03:36
so you know someone like will is the
03:38
person who's developing the technology
03:40
but i do want to introduce one concept i
03:42
sit on a work group with the mid-ohio
03:45
regional planning commission that is
03:46
developing
03:47
a survey and tool kits for local
03:50
governments
03:51
and people in my situation to try to
03:53
enhance what we can do
03:55
as we begin to delve into things like
03:57
machine learning
03:58
and artificial intelligence and incorporating
04:01
the
04:02
human element or verifying and keeping
04:04
the human element
04:05
into our technology development and so
04:09
uh as will kind of has alluded to
04:12
our goal is to this is the unexciting
04:16
and unsexy part of artificial
04:18
intelligence which is
04:19
the slogging forward i thought it was
04:22
all sexy
04:23
[Music]
04:25
no well it's all sexy to me but i mean
04:29
to the audience but there's a lot of
04:30
spreadsheets involved
04:32
well i mean the thing is is that you've
04:35
got to
04:35
interject and intervene
04:40
the concepts of equity and and equality
04:44
into every part of the technical uh
04:47
rollout
04:47
and that is a slog it's just it's just a
04:50
pickaxe kind of hammer and chisel
04:52
working away like fred flintstone on on
04:55
these things and
04:56
we're just one small community which is
04:58
why we try to act as a region
05:00
in central ohio to answer these
05:02
questions for
05:03
uh many communities and not just one
05:06
because if one community does a
05:07
fantastic
05:08
job and nobody knows about it then it's
05:10
not going to help these things have to
05:12
scale
05:14
yeah and i think there's some there's
05:16
like when people
05:17
depending on who you talk to even some
05:19
people are scared to death when they
05:20
think about
05:21
artificial intelligence because i really
05:24
like the way will said it because people
05:25
see artificial intel some do
05:27
see artificial intelligence and saying
05:29
like okay are we replacing
05:31
humans the ultimate goal um i mean
05:35
i would say no i mean this this uh i'm
05:37
trying to remember the book i think it
05:38
was the second machine age
05:40
and it talked about the fact that okay
05:42
we do we now have
05:44
we now have a computer that that can out
05:46
that can beat
05:48
regular a regular one person a chess
05:50
master if it's just one chess master but
05:53
if you have a team that's actually just
05:55
knows how to use the computer
05:57
they were able to beat a supercomputer
06:00
and chess masters because they knew how
06:01
to use the
06:03
they knew how to use computers in the
06:04
right way while also interjecting
06:06
human potential and human thinking so i
06:09
do think that's how we have to think
06:11
about it and that's how the process has
06:12
to be
06:13
and it shouldn't just scare us because i
06:14
tell look technology is coming
06:16
one way or another we got to figure out
06:18
how we use it
06:20
and make sure it has the most impact for
06:22
all involved absolutely
06:23
absolutely if i could tag in on that
06:25
because i'm really impressed with the
06:27
way that will
06:28
is talking about augmented intelligence
06:30
i think
06:31
of this in that way like computers can
06:35
calculate
06:36
in a way that's far and away more than
06:37
any human is going to be it's just raw
06:39
brute strength
06:40
but making decisions and having judgment
06:43
uh
06:43
and nuance is not that great of
06:46
a strength for them so when we think
06:48
about
06:50
i'm going to make decisions but when i
06:52
need to do
06:53
massive raw computation i'm going to
06:55
call on a tool
06:56
and that's what we need to think of
06:57
artificial intelligence as as just a
06:59
tool
07:00
and then we can do a lot of it and it's
07:02
a little bit less scary i think
07:03
and this is my point about presenting
07:05
choices not making decisions right
07:07
because a lot of
07:08
you know and i come from a data world
07:10
i'm a little biased but you know a lot
07:11
of what we're trying to accomplish with
07:13
with artificial intelligence is how do
07:15
we make sense of just massive amounts of
07:17
information and data and present it in a
07:18
way that's
07:19
you know readily available for rob to
07:21
take an action with
07:22
and so when you start thinking about
07:24
sort of all of the different
07:25
permutations and the dimensions that you
07:27
have to kind of start working through in
07:29
order to make those decisions obviously
07:30
this is where the machine algorithm
07:32
approach
07:32
makes the most sense but at the end of
07:34
the day it's about how do i distill that
07:36
massive amount of information down to
07:37
something that's consumable
07:39
so then a human being can actually
07:40
interject that choice because so the
07:42
point that you made doug i mean
07:43
the intuition the intellect the instinct
07:45
the emotion that we
07:46
we absorb in part of that interactive
07:48
experience is what makes us human
07:51
and and we're never going to achieve
07:52
that at least today with with technology
07:55
yeah absolutely it's incredibly result
07:58
driven
07:59
and a lot of times you really should be
08:01
considering who
08:02
is going to be using that at the end who
08:04
is your audience that you really want to
08:06
be
08:07
able to make that decision off of so
08:09
something we
08:10
hear a lot of in like algorithms around
08:12
machine learning and artificial
08:14
intelligence
08:15
is that the expression like garbage in
08:17
means garbage out
08:18
so when you're dealing with poor limited
08:20
or biased data sets
08:22
so like in some cases those bad data
08:24
sets can
08:25
result in a faulty product so like
08:28
due to limited training data one
08:30
example is voice technologies often
08:32
misinterpret or have trouble
08:34
understanding like particular accents or
08:36
women's voices
08:38
and another that i know you've talked
08:39
about before will is chat bots and
08:41
recommendation systems so without
08:43
human intervention in those those can
08:45
have some disastrous results
08:46
and this goes back to how these things
08:48
are trained i mean rob and i had a fun
08:49
conversation about this on the podcast
08:51
where
08:51
you know a lot of these these disasters
08:53
that have occurred like microsoft's tay chatbot for
08:55
instance was because they were training
08:57
on conversations occurring on twitter
08:59
and so
09:00
you know you got to ask yourself like
09:01
have you ever been on the internet have
09:03
you ever gone through the internet
09:04
comments that is not necessarily a place
09:05
where you want to train why is that
09:06
important
09:07
because sentiment is how we determine
09:10
how
09:10
we should emotionally react to something
09:13
and when you start training a machine to
09:15
learn sentiment from conversations that
09:17
are happening in cyberspace with no
09:18
controls with complete anonymity with
09:20
complete toxicity
09:22
we start to associate sentiment around
09:23
gender sentiment around race
09:25
sentiment around socioeconomic status
09:27
sentiment around whether you've been
09:29
involved in the justice system
09:30
and that starts to then impact the way
09:33
decisions are being made by these
09:34
systems now it's one thing if i'm trying
09:36
to promote a pair of tennis shoes it's
09:37
another thing if i'm trying to
09:38
determine a sentencing guideline and so
09:40
that's why when you think about garbage
09:42
in garbage out garbage in garbage out
09:43
could mean that
09:44
you know somebody promotes to me the the
09:46
next pair of high heels which has no
09:48
real functional value to me okay that's
09:50
garbage in garbage out you made a bad
09:51
decision i'm not going to transact
09:53
but it could also mean making a decision
09:55
about an individual based on the way
09:56
they look
09:57
based on the way they talk based on
09:59
their gender and having that
10:01
built inherently into the system making
10:04
these decisions this is not like a human
10:05
being having a bad day this is a machine
10:07
that has been trained
10:08
to understand bias and to apply bias in
10:10
his decision making
10:12
that is a critical mistake that we have
10:13
to avoid going forward and we've seen
10:15
plenty of examples
10:16
of folks who have just trained on these
10:18
data sets and unleash things to the
10:19
public and have very bad results
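A rough sketch of the mechanism being described: when the training text pairs certain identity terms with toxic language, even a naive sentiment model inherits that pairing. The tiny corpus, group labels, and scoring rule below are invented purely for illustration.

```python
# Toy "garbage in, garbage out" illustration: sentiment learned from biased text
# attaches itself to identity terms that should be neutral. Data is made up.
from collections import Counter

corpus = [
    ("group_a people are wonderful and kind", +1),
    ("group_a folks are brilliant", +1),
    ("group_b people are terrible", -1),          # toxic "internet" examples
    ("group_b folks are awful and dumb", -1),
    ("the park was wonderful", +1),
    ("the traffic was terrible", -1),
]

pos, neg = Counter(), Counter()
for text, label in corpus:
    for word in text.split():
        (pos if label > 0 else neg)[word] += 1

def association_score(word):
    """Naive association: positive minus negative co-occurrence counts."""
    return pos[word] - neg[word]

# Audit step: probe identity terms that should score near zero.
for term in ["group_a", "group_b"]:
    print(term, "->", association_score(term))
# group_a comes out positive and group_b negative, not because of anything true
# about either group, but because of what the training text said about them.
```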
10:22
yep and i would say just to add to that
10:24
before you move on to doug i know you're
10:25
going to ask question there
10:27
it also leads to economic results like
10:29
it's of course it's we don't like it
10:30
being diverse people here we don't like
10:32
it when there are
10:34
negative implications for women when
10:36
there are negative
10:37
implications for black and brown people
10:39
it affects us in a personal way
10:40
but you talked about this on the podcast
10:42
too it goes beyond that like you know
10:44
when microsoft released the racist bot
10:46
that hurts your reputation and and
10:48
reputation travels real fast right now
10:50
on the internet
10:51
it's bad business right absolutely so
10:53
yeah garbage in
10:54
means you know not only having the right
10:56
data sets but i'm sure people didn't go
10:58
in and say oh
10:59
i want to create garbage in they just
11:01
didn't have the right people there to
11:03
say
11:03
like you said if you and i were there or
11:05
doug was there like wait wait wait wait
11:07
wait
11:07
you're going to get your information on
11:09
twitter like are you out of your mind
11:11
now now we're getting into the economic
11:13
advantage of having an inclusive team
11:15
because again that was the part that we
11:16
just yeah again like how people just sat
11:18
in a room and no one
11:19
thought this would be a bad idea clearly
11:21
was lacking representation
11:23
simple representation by the way if i
11:25
could
11:26
put a perspective in there before you
11:28
ask your next question lauren no
11:30
please one of my interpretations is that
11:32
uh you know microsoft did not release a
11:34
racist bot
11:35
microsoft released a reflective bot onto
11:38
a
11:39
racist data set right and
11:42
it's even better you know what i mean i
11:44
mean it reflects
11:46
what we do it is you know the technology
11:49
worked
11:50
perfectly right as exactly as designed
11:53
i understand why they did it and how
11:55
this happened as well and i'm not giving
11:57
them a pass or anybody a pass for doing
11:59
this
12:00
but you know the question implies is
12:02
also
12:03
the we're trained as data people that
12:06
the larger your data set
12:08
the more accurate and valuable your
12:10
results are going to be so what you
12:11
really want is the biggest data set you
12:13
can have
12:14
and if you're trying to train for
12:16
conversationality
12:17
twitter is a pretty good choice people
12:20
are having conversations on it
12:21
and the data is massive and so i think
12:25
in addition to the people conversation
12:27
we're probably about to have
12:28
uh we need to think about if you're just
12:30
talking about purely the data before you
12:32
got to the team
12:33
or anything like that uh we actually
12:36
need to be proactive and intentional
12:38
to create data sets that are not
12:41
racist because our actual behavior
12:46
is racist so our bots are going to be
12:49
right so if you want non-racist bots
12:52
we're
12:52
going to have to actually proactively
12:55
and intentionally
12:56
insert data so so mass
13:00
of data is actually less valuable than
13:03
the accuracy and the targeted design
13:06
that we're going for i think what i would
13:09
what i would i wouldn't counter but what
13:10
i would argue there though
13:11
not to not give i'm not trying to attack
13:13
and i'm also not giving a pass but you
13:15
know there's got to be an intentionality
13:17
behind it right because to your point
13:19
yes the the the data sets that you were
13:21
going to examine as part of these
13:23
exercises
13:24
you have to put thought behind those
13:25
things and i go back to just simply
13:27
representation
13:28
if you're if you're releasing a chat bot
13:29
with the intention of putting it out
13:31
there to the world to serve all of the
13:32
people
13:33
then you should probably be thinking
13:35
harder about the how much representation
13:37
is within those data sets so i do think
13:39
there's some accountability there i hear
13:40
what you're saying and yes i mean as a
13:42
as a technologist as a scientist you
13:43
know i i often
13:45
the same thing right you give me a
13:46
massive corpus of conversations
13:48
happening out there
13:49
in cyberspace and i can do a lot of
13:51
really cool and interesting things with
13:52
them
13:52
but again is my intention to do academic
13:55
research or is my intention to provide a
13:57
service
13:58
back out to the public and so there's
14:00
just again i think there needs to be a
14:01
discipline and an accountability
14:03
to how you think about that data set how
14:06
are you culling it how are you ensuring
14:07
that it's a representative data set
14:09
if that is what your intention is to go
14:11
do um so but i don't yeah but i
14:13
absolutely agree with you doug i mean
14:14
this is a byproduct of
14:16
of something else not necessarily it's
14:18
not the bot
14:19
it's not the bot's fault let's just say
14:20
that you know it's like how there's no
14:22
bad dogs right
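One hedged way to act on the "be proactive and intentional about the data" point is to audit representation before anything trains on a corpus. The group labels, target shares, and field names in this sketch are hypothetical placeholders.

```python
# Sketch of a pre-training representation audit: compare each group's share of
# the corpus against an intended target and flag gaps. Labels are placeholders.
from collections import Counter

def representation_report(records, group_key, targets, tolerance=0.05):
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    report = {}
    for group, target in targets.items():
        share = counts.get(group, 0) / total if total else 0.0
        report[group] = {
            "share": round(share, 3),
            "target": target,
            "underrepresented": share + tolerance < target,
        }
    return report

if __name__ == "__main__":
    data = ([{"text": "...", "dialect": "a"}] * 80 +
            [{"text": "...", "dialect": "b"}] * 20)
    print(representation_report(data, "dialect", targets={"a": 0.5, "b": 0.5}))
    # dialect "b" gets flagged, which is the cue to collect more, reweight, or
    # resample before this corpus trains anything user-facing.
```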
14:23
right but the self-awareness part while
14:25
we're on this i think this is just a
14:26
good conversation here
14:27
and really important for us to learn
14:29
from because there's a there's two
14:30
lessons one
14:32
uh right thing to do but also
14:34
economically like your
14:35
reputational damage happens quickly and
14:38
it happens
14:39
fast and so understanding that even if
14:41
you don't believe in this which we i'm
14:43
sure
14:44
most of us are preaching to the choir we
14:45
believe in being inclusive we believe in
14:48
diversity you don't have to believe any
14:49
of that like you said we talked about
14:50
this you
14:51
it is good for your business because
14:53
here's what's going to happen right
14:54
you don't approach this right uh it will
14:57
affect you
14:58
and so you have to know
14:59
i think we have to
15:01
accept
15:02
kind of the doug's point that you are
15:04
biased
15:05
and in order and and your default is
15:08
going to be
15:08
to do biased things unless you have
15:12
a process in place to check
15:15
your own bias and people have to
15:18
want to start from there the
15:21
acknowledgement that i am biased i'm
15:22
including myself in this we are all
15:24
biased and unless you and until you have
15:27
a process
15:29
that holds you accountable to not
15:30
letting your bias
15:32
affect and infect the data this will
15:34
happen
15:35
again and again and again that's kind of
15:37
my point
15:38
yeah absolutely i completely agree
15:41
because
15:42
as much as we we are talking about like
15:44
the responsibility aspect of it like
15:46
companies and individuals
15:48
no one's setting out to create a racist
15:50
or sexist algorithm
15:51
but at the same time while we know
15:53
that's not their intention
15:55
they also have to be intentional
15:56
themselves to sort of mitigate
15:58
that bias and that implicit like
16:02
bias that is causing harm for the users
16:04
of this because
16:06
if you don't have people asking those
16:08
questions along the way
16:10
it does come back and it is your
16:11
responsibility that this happened
16:13
and this has this poor result um and
16:16
the negative publicity i think is
16:18
helping to
16:20
get more policies and more processes in
16:22
place so people are getting ahead of
16:24
that
16:24
and ensuring that these sort of things
16:27
aren't happening over and over again and
16:29
aren't
16:29
affecting people in a negative way um so
16:32
something that we're like i think we're
16:33
getting at is this like disconnect
16:35
between the data
16:36
and the actual users so we know and it's
16:40
widely known that ai can perpetuate
16:42
forms of discrimination in ways that
16:44
disproportionately
16:45
affect the impacted populations whether
16:48
that's by race
16:49
sex gender religion anything like that
16:52
and it can cause ongoing economic and
16:54
societal problems for those individuals
16:56
so when have you seen the failure of
16:59
algorithms
17:00
that have gone wrong and why do you
17:02
think it's important that we
17:03
look into that further i'll just tag
17:06
something in there on that it's more
17:08
question than answer but
17:09
uh bias is human and and we shouldn't
17:12
demonize it
17:13
uh i i i'm gonna speak to one person
17:16
in a different way than i speak to
17:17
another person because of visual cues
17:20
that that's just how the human computer
17:22
works and uh we need to recognize this
17:26
the other thing is as a technologist
17:28
this
17:29
is not how we develop technology and
17:32
computer science as a discipline
17:34
did not really develop over the last 30
17:36
to 50 years
17:38
uh mechanisms for this human aspect
17:41
because this is this is new uh we we
17:44
have
17:44
a merging of technical capability and
17:49
scale
17:50
with human uh things that
17:53
we honestly need the disciplines that
17:55
we've been poo pooing for a long time
17:57
why are you going into art history why
17:59
are you going into ethics
18:00
when computer science is the field
18:05
to go into i mention that also because
18:08
algorithms predate uh computers right so
18:11
computers scale that algorithm and
18:13
automate it but the algorithm itself
18:15
i would say something like redlining or
18:18
gerrymandering
18:19
or uh you know some of those
18:22
economic decisions that have been made
18:26
with pencil and paper for a long time
18:28
so i just want to make sure i throw that
18:29
in there that's a really
18:32
interesting point actually absolutely
18:35
yep and i think we kind of already
18:37
touched it obviously we made a couple of
18:38
examples about how the algorithms got it
18:40
wrong we went through the bots and
18:42
and you know we we want to make sure we
18:44
actually get it right
18:45
as for the two reasons we already really
18:47
discussed so this is why i think when
18:49
companies hear this and
18:50
you know why it's in their economic
18:52
interest to have diverse perspectives
18:54
you know one of my clients they have a
18:57
company that they just do
18:59
they help they help companies avoid
19:01
making stupid mistakes when it comes to
19:02
advertising you know that
19:04
that should be easy like there was one
19:05
example of uh
19:07
i can't remember the company but it was
19:08
a company selling beer
19:10
and and so they they went and they had
19:13
the
19:13
it was a bunch of people of color in
19:15
this bar and they pushed past
19:17
all the people of color just to a white
19:19
person and everybody else
19:20
and they didn't see they didn't see a
19:22
problem with that so you know
19:24
it obviously if you can't see and that
19:25
seems super obvious to me like that's it
19:27
but like if people can't see that how
19:31
you can imagine how much of a challenge
19:32
they have when when they're looking at
19:34
analyzing data when they think is going
19:37
to be
19:37
objective they think as doug said
19:40
as long as we have more
19:41
uh we're going to be more accurate but
19:43
that's actually not true
19:45
and if you have people that have
19:46
different perspectives because it's not
19:48
and it's not just
19:49
technical aspects actually a lot of the
19:52
proof has been shown that people can
19:53
make
19:53
contributions to science that are like
19:56
tangible even when they don't have
19:58
the level of expertise that's why it's
19:59
important to have
20:00
multiple points of view we're not just
20:01
talking race but that's important too
20:03
but also perspective
20:05
absolutely and you know it's interesting
20:06
i mean i think we talked about this a
20:07
little bit before when it comes to ai
20:08
what i'm excited about
20:10
is that it the i don't want to call it
20:12
the bar but the requirement for
20:14
contribution comes down dramatically
20:16
right this is not no longer just
20:18
requiring your your data scientists it's
20:20
no longer just requiring your data
20:22
engineers and your phds and your
20:23
computer scientists i mean we need real
20:25
people
20:26
involved in training these algorithms
20:28
and analyzing these data sets and
20:30
ensuring that they make sense and so i
20:31
think there is a real opportunity
20:33
also back to what we said before i mean
20:35
you know i think we could have a whole
20:36
panel we could talk about
20:37
inclusion and diversity and equity and
20:39
why these things are important and i'm
20:41
sort of just tired of it
20:42
frankly but what what is actually
20:44
relevant here is back to the point that
20:45
we're talking about this is about
20:46
economics and having economic advantages
20:49
you may not
20:50
think that you know seeing the people
20:53
past the bar
20:53
is that big of a deal you may not even
20:55
know to even ask those questions to
20:56
lauren's point these people don't get
20:58
together
20:58
with the intention of let's go create a
20:59
really offensive advertising for our
21:01
beer like you know that's not the
21:03
that's not the objective but again we
21:05
can see right there where that lack of
21:07
inclusion
21:08
led to a mistake which has a negative
21:10
impact
21:11
so if there's anything else to take away
21:12
it's to look at your teams that are
21:14
making decisions on behalf of the
21:15
customers
21:16
that you serve because i guarantee you
21:18
almost everybody listening has an
21:20
inclusive set of customers or at least
21:21
an intention to serve an inclusive set
21:23
of customers
21:24
so if you are not representing those
21:26
customers in that room
21:27
you have a disadvantage and you need to
21:29
think about your strategies whether it's
21:30
around how you recruit how you build
21:32
teams how you promote your leadership
21:34
specifically for the goal of not missing
21:36
out on economic opportunity
21:38
if no other reason moves you yeah i
21:40
agree
21:42
i just want to put a shout out at that
21:43
leadership level as well because
21:46
uh we have inclusion and a lot of
21:48
different things and
21:49
and just i don't know this commercial
21:50
that you're talking about but i can see
21:52
how
21:52
different uh concepts can can collide
21:56
so at one level you might have had
21:58
someone saying we need more people of
22:00
color
22:00
in our advertising yes yes and they put
22:04
them in there
22:05
and then another person in a different
22:06
part of the organization was designing
22:08
the storyboard for this other thing
22:10
but none of those people would be able
22:12
to kill this ad before it came out
22:14
that's going to take a vp or a senior vp
22:17
or something like that
22:18
and so you can have a level of inclusion
22:21
at the
22:22
at the employment level but we need
22:25
those board people
22:26
you need those people that you you know
22:28
the good phrase is who thought this was
22:30
a good idea
22:31
now it passed a whole lot of people who
22:34
didn't have the ability to say
22:36
uh this doesn't look good and and so i i
22:39
just want to
22:40
re-emphasize will what you're saying
22:41
about especially at the leadership
22:43
level you need ceos you need board
22:46
members you need
22:47
senior level people who can say uh we're
22:50
gonna have a
22:51
serious conversation about this ad or
22:53
this algorithm
22:55
or this decision um because i've got the
22:57
authority to do it i'm safe
22:59
i'm not worried about it or about
23:03
you guys getting rid of me or anything like
23:05
that i'm here to do that and and so
23:08
that's important well the term i use is
23:09
empowered are there folks not only
23:12
at the table are they empowered right
23:14
are they empowered to have a voice
23:15
to to make a decision to to to veto
23:18
something and yeah so absolutely agree
23:20
yeah and i would say on that point
23:21
before i move on because there's a
23:23
couple things i'd like to talk about
23:24
um you know will you talked about the
23:27
technology and ai and actually lowering
23:30
the bar in terms of entry that that is
23:31
true
23:33
there's actually there was a stanford
23:36
actually did an online they opened up
23:38
they did this a while ago they opened up
23:40
artificial intelligence classes online
23:42
and
23:43
and i think the best student at
23:46
stanford
23:47
came in at around 400 in that class so that
23:49
tells you okay
23:50
stanford is supposed to you know only
23:52
let in the best and brightest
23:54
there's a lot of people that certainly
23:55
have the potential that never took ai
23:57
before
23:58
and are able to understand it so i think
24:00
this is there's an opportunity for
24:02
companies if they're willing to be more
24:04
inclusive to have a lot of different
24:05
perspectives here and they have to look
24:07
at
24:07
they have to look at just not the
24:09
traditional way but
24:11
how the technology we have now is really
24:13
democratizing the process and i would
24:15
say that for individuals too
24:16
for individuals listen you know you
24:17
don't you don't have to you know
24:20
well no you didn't you didn't graduate
24:21
from college you don't have to go to
24:22
college necessarily
24:24
to have a skill set in this you just
24:26
have to be passionate and be open and
24:28
be open to learning and there are many
24:30
tools and opportunities to actually do
24:31
so
24:32
absolutely yeah the learning mindset is
24:34
is so key
24:36
yeah and absolutely like nowadays there
24:39
are so many
24:39
boot camps certificate programs even
24:42
particular companies like google
24:43
they are now having training programs
24:45
where you don't need a college degree
24:47
but they will train you in what you need
24:48
to
24:49
know to be successful in that
24:51
environment and i think
24:52
like ai and data science you see a lot
24:55
of diversity of backgrounds and a lot of
24:57
diversity of people which i think is why
24:59
it's growing so
24:59
fast and you are seeing a lot of
25:03
like new things coming out and um it's
25:05
definitely something that
25:06
like in the data science teams i've
25:08
worked on i can tell the like diversity
25:10
of perspective
25:11
and the diversity in the decision-making
25:13
process like doug has mentioned
25:15
is something that is incredibly
25:17
effective and
25:18
that's something i'd really like to
25:19
touch more on so
25:22
not only is it useful to have like
25:23
diverse data sets and diverse
25:25
perspectives but diversity in that
25:27
decision-making process
25:28
especially at high levels and that's
25:30
something that like companies venture
25:32
capital firms investors
25:34
they actually put themselves at a
25:36
distinct disadvantage by not
25:38
incorporating those diverse groups of
25:40
people in that process
25:41
so will i know that's something you've
25:42
mentioned it's absolutely economical
25:45
to include people and it shouldn't just
25:47
be social responsibility it should be
25:49
in your mind that this is a positive to
25:52
your business and a positive to your
25:54
customers for you to be
25:56
more inclusive in that whole process and
25:58
a negative not to be which is also
26:00
i think important to state right right
26:02
it affects both both sides yeah
26:04
absolutely uh
26:07
if if i could comment on that as well um
26:10
i think we are
26:12
kind of predisposed to avoid friction
26:15
and when you have more diversity and
26:19
more creativity
26:20
and more uh opposing viewpoints
26:23
it really does feel like friction and i
26:26
think that especially in fast-moving
26:28
businesses and high-growth companies
26:30
we're inclined to say uh that's bad
26:34
like you guys have been arguing about
26:37
this for an hour
26:38
and that's bad yeah and as opposed to
26:40
the more creative companies that are in
26:42
organizations that are kind of like
26:44
this is awesome we're getting to
26:46
insights that we wouldn't
26:47
if we all agree and so i think that's
26:50
really critical because
26:51
especially in technical innovation it
26:54
truly requires
26:55
friction and if we've been in
26:59
environments that were never diverse
27:00
before
27:01
we're going to experience some
27:02
discomfort and
27:04
that's just a saying that all of us who
27:06
are trying to achieve diversity and
27:07
inclusion
27:09
have to come to terms with and augmented
27:12
uh uh artificial intelligence and
27:14
machine learning are no different from
27:15
any other type
27:16
and i want to touch i i use the word a
27:18
lot with even within my own company
27:20
about just the
27:21
the discomfort being uncomfortable and i
27:23
think it's something that we have to get
27:25
much better at as a society
27:26
and you know keep that safe space
27:28
mentality as well and i often tell
27:30
people they tend to misinterpret when we
27:32
talk about safety is comfort safety is
27:34
not comfort
27:35
in my mind safety is the ability for you
27:37
to show up as your whole self contribute
27:38
your whole self and we're not
27:40
putting you in a situation where you're
27:41
holding back because again that is a
27:43
disadvantage to me
27:44
as a business leader if i have resources
27:46
within the company that cannot show up
27:48
and fully engage
27:49
but i do not need you to be comfortable
27:51
in fact i want you to be uncomfortable
27:53
at times because that's when we're
27:54
growing that's when we're challenging
27:56
our assumptions
27:57
i call it breaking the bottle we got to
27:58
get out of this trap that we've sort of
28:00
set for ourselves and so
28:02
i think for organizations to start to
28:04
not only just when we're talking about
28:05
diversity and inclusion or ai
28:07
just bring that to your culture
28:10
celebrate discomfort get people just
28:12
used to it
28:13
that way we walk into situations that
28:15
are uncomfortable with the objective
28:16
that we're going to go find a
28:17
commonality and go find solutions
28:19
i think as we start to kind of avoid
28:20
these types of things because they feel
28:22
like friction they feel like conflict
28:23
we're just we're just stuck in the water
28:25
we're not going anywhere have you ever
28:26
been
28:27
white water rafting right when you're
28:29
pushing you're moving right so
28:31
it's definitely going to be hard at
28:32
times but that's when we make progress
28:34
i think a word that comes to mind when
28:35
you're talking about that is really
28:37
relevant to this because it is that like
28:39
disruption
28:39
of your current environment and your
28:42
process and
28:43
i think part of that is knowing that it
28:46
doesn't
28:46
have to continue going the way it has
28:49
been and it's okay
28:50
to revisit a process revisit a system
28:53
and change what needs to be fixed
28:55
instead of just going with the flow when
28:57
you know that there's something that
28:58
needs to be
28:59
addressed so you can effect
29:01
change in that way
29:03
yeah i mean there are several companies
29:04
that look at like all the most creative
29:06
companies as doug says they have
29:07
different versions of this right
29:09
i think uh jeff bezos talks about
29:11
day one where you're always in the
29:13
mode of changing changing building
29:15
and not going to the default of okay
29:17
accepting the process because this is
29:19
the process
29:19
of what we've always had and then you
29:21
get into the kodak situation and i'm
29:23
sure most folks know where
29:24
kodak actually created the technology
29:26
that is instagram
29:28
and they said well it might disrupt our
29:29
entire business and you know they were
29:31
right and they no longer have a business
29:32
i mean that's how that works if you're
29:33
not willing to challenge yourself
29:36
and have what malcolm gladwell called a
29:38
constructive rivalry because you can
29:40
have something where you are actually
29:41
you
29:41
encourage the environment to challenge
29:44
the system
29:45
in a constructive way but challenge the
29:46
system and actually empower people as
29:48
well he says that should be the
29:49
environment
29:50
that you are working in and i'll make a
29:52
guarantee to you if that's not the
29:54
environment that you're in
29:55
it's not going to if it's a startup it's
29:57
not going to last but if it's a company
29:58
it's probably not going to last either
30:00
absolutely yeah i um so i'd like to
30:03
bring it back to like will you
30:04
specifically
30:05
and what you're doing with search engine
30:07
so an example i have that i'd like to
30:09
bring up of how something went very
30:11
wrong
30:12
back in 2009 safiya umoja noble
30:15
searched for the terms
30:16
white girls on a search engine and came
30:19
up with a lot of nice stock images
30:20
and then she searched for the terms
30:22
black girls latina girls and asian girls
30:24
and each time the first page was filled
30:26
with pornography
30:27
so she wrote a book about this called
30:29
algorithms of oppression and she poses
30:31
the idea that search and
30:32
engine results are not nearly as neutral
30:35
as they may seem
30:36
so i want to ask you what people
30:38
commonly get wrong when they think about
30:39
how
30:40
search engines work and what are you
30:41
doing differently
30:43
yeah i mean well yeah okay let me answer
30:45
those separately so i mean
30:47
in the case of google and i'm fully
30:49
aware of the example and you know do the
30:51
same thing with with males as well look
30:53
up white teens look up black teens
30:55
you're going to see
30:56
you know kids playing in the park and
30:57
you're going to see you know
30:59
people being incarcerated literally this
31:02
one is
31:02
tricky when you look at it from the
31:04
perspective of let's just say
31:06
google and you know their algorithms
31:08
because
31:09
a lot of what informs ranking within
31:12
google
31:12
is again the internet and the linking
31:15
across the internet the way people
31:18
reference
31:19
women of color versus white women the
31:22
what they link to what they talk about
31:24
the volume
31:25
right and so what you're seeing there is
31:27
again a pretty
31:29
dark reflection of society and
31:33
the question i think is more so along
31:35
the lines of
31:36
what is the responsibility of somebody
31:38
like a google
31:39
to curate that sort of content and that
31:42
sort of behavior and you can imagine
31:44
that that opens up a whole can of worms
31:47
right because now it's interesting we
31:49
talk about sort of augmented
31:50
intelligence to prevent us from just
31:51
doing something stupid
31:53
but now when you're talking about sort
31:54
of shaping information
31:56
you're now actually having a human again
31:58
inject their bias into
32:00
what what is okay
32:03
what's not okay can i celebrate the
32:05
female body can i not i mean there's all
32:07
these questions that sort of come out of
32:08
this so
32:08
i just want to call out that that's a
32:10
very difficult question that i think
32:12
even as a society we're not necessarily
32:14
sure how to answer i don't know that i
32:16
can blame
32:17
google for using linking as part of
32:19
their algorithm it clearly points to
32:22
where the internet has decided that
32:24
these things are associated
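For readers who want the mechanism behind "linking informs ranking," here is a toy, heavily simplified PageRank-style iteration. It is not Google's production algorithm; it only shows how scores flow along links, so whatever the web links to most heavily floats to the top, biases included.

```python
# Toy link-based ranking: a page's score comes from the pages that link to it.
# The graph below is invented; real web-scale ranking adds many more signals.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue  # toy simplification: dangling pages leak their score
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

toy_web = {
    "page_a": ["page_b", "page_c"],
    "page_b": ["page_c"],
    "page_c": ["page_a"],
    "page_d": ["page_c"],   # whatever gets linked to most rises to the top
}
print(sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]))
```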
32:25
right but we we probably have a lot of
32:27
discussion to determine
32:28
at which point is a provider responsible
32:31
for curating that and as they start to
32:33
curate does that even change the dna of
32:35
who they are
32:37
so meaning like am i no longer just this
32:39
search engine am i actually like a
32:41
content provider
32:42
a content stream am i you know am i uh
32:44
an editorial you know that type of thing
32:46
so it's something to think about
32:47
when it comes to us so so my company we
32:49
build um what we call sort of ai powered
32:52
search engines
32:53
we work with a number of top retail
32:55
brands we work with
32:56
large companies that are just trying to
32:58
use artificial intelligence and search
33:00
technology to help people
33:01
find information within the company if
33:03
you ever worked for a big business or
33:04
worked for a
33:05
a government like like doug you know
33:07
there's just massive amounts of
33:08
information and so we really help people
33:10
use the same approaches that google or
33:12
amazon uses to their experience
33:14
to understand their corporate
33:15
information because of that kind of to
33:17
what doug
33:17
said we have one advantage in that we
33:20
are typically looking at a very specific
33:22
corpus if i'm in retail
33:23
obviously everything i'm doing is in and
33:25
around that catalog and how i'm helping
33:27
people
33:27
engage that catalog more discover that
33:29
catalog more if i'm working within a
33:30
bank
33:31
it's all around financial services and
33:33
the data that they have
33:34
within the bank that proprietary data so
33:36
we don't get
33:37
exposed as much to some of these issues
33:40
that others might have
33:42
but we do often need to think about
33:44
again like the way we're making
33:45
assumptions about
33:46
ranking there was an interesting
33:48
discussion that went on in my company
33:49
about we were we were using color
33:51
as a higher weighted um field basically
33:54
a piece of metadata
33:56
compared to another field and it got
33:57
into an interesting conversation that
33:59
is that a gender bias to assume in a
34:01
recommendation engine that color matters
34:04
is that just something that we just
34:05
assume because we happen to be looking
34:07
at
34:07
you know a women's apparel store but the
34:09
moment we go somewhere else
34:10
does that go out the window so you see
34:12
that little thing this wasn't
34:13
necessarily about inclusion or exclusion
34:14
but it was one
34:15
sort of interpretive point that could be
34:18
seen as being biased that could take us
34:20
down a variety of paths which may or may
34:22
not be advantageous to our user
34:24
so there is some complexity that comes
34:25
into those those scenarios but i do have
34:27
the advantage of
34:28
we're not out there trying to consume
34:30
the internet and build recommendation
34:31
models and things where
34:32
again as doug was saying we're taking
34:34
trillions of records because that's how
34:36
we get a good inclusive data set
34:38
and we're allowing the the biases and
34:40
the sentiments within those records
34:42
to make decisions we don't really get
34:43
into that as much
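The color-weighting debate described here comes down to per-field boosts in a relevance score, and the boost itself encodes an assumption about the audience. Below is a minimal sketch with an invented catalog and invented weights, not Lucidworks' actual ranking code.

```python
# Minimal field-boosted relevance score: each field gets a weight, and the
# weights are modelling assumptions (e.g. "color matters a lot for apparel").
def field_score(query, doc, weights):
    score = 0.0
    for field, weight in weights.items():
        if query.lower() in str(doc.get(field, "")).lower():
            score += weight
    return score

catalog = [
    {"title": "red summer dress", "color": "red", "category": "apparel"},
    {"title": "garden hose", "color": "red", "category": "outdoor"},
]

# Boosting color 3x is a design choice, not a neutral fact about relevance.
apparel_weights = {"title": 1.0, "color": 3.0, "category": 0.5}
for doc in catalog:
    print(doc["title"], "->", field_score("red", doc, apparel_weights))
# Point the same weights at a different catalog (say, power tools) and the color
# boost may only add noise, which is why the assumption deserves scrutiny.
```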
34:45
yeah what a great problem though
34:47
speaking of friction
34:49
and controversy and oh that's a longer
34:51
conversation
34:52
because we're going to come up with
34:53
better products where that's concerned
34:55
and i agree with everything you've said
34:57
will thinking about you just sort of
35:00
remove google from that
35:01
and what do all of us have as a
35:03
responsibility to building
35:05
effective search and what are our actual
35:08
objectives
35:09
the original algorithm of google has to
35:12
do with
35:13
rankings and has to do with what is the
35:15
top because that's what the internet
35:17
says you know that based on everything
35:19
that's out there
35:20
and it's probably what you're looking
35:22
for and that's kind of good enough
35:24
but search now is
35:27
is more detailed and has
35:30
actual objectives and so forget about
35:33
google for a second
35:34
when you're searching for employees and
35:38
you're
35:39
you're using your network and you're
35:41
looking for candidates you're really
35:42
acting as a search engine
35:43
and when you are making decisions about
35:45
parole
35:46
for offenders you're you're kind of
35:49
acting
35:50
as an algorithm it's a much smaller data
35:53
set
35:53
and i do believe and this is
35:55
controversial will so i'll be
35:57
interested in what you think about this
35:59
i do believe we need to
36:01
insert some intentionality into
36:04
the objectives that we're trying to
36:06
achieve we have a universality
36:09
and an illusion of no
36:12
bias that says i'm not going to touch
36:15
the data
36:16
because that would bias it well data's
36:18
already biased and the results are
36:20
published
36:21
so now if we want to accomplish a
36:23
certain hiring
36:24
or fairness in sentencing or uh
36:27
fairness in giving loans out
36:29
we're going to have to insert objectives
36:32
and then we're going to need to
36:33
look at the data set that comes back and
36:35
iterate through it
36:36
and say well did this come back with the
36:38
kind of inclusion that we were looking
36:40
for if not we need to iterate
36:43
and that is controversial and and it
36:45
should be and and
36:46
i'm thankful that we're at this moment
36:48
right now where we're having an honest
36:49
conversation
36:51
about race and sex and sexism and and
36:54
and you know abuse and and these things
36:57
so that we can begin to say
36:59
wait a second this goes against values
37:01
that we used to have
37:03
but today's values we want to use our
37:06
technology to accomplish our societal
37:09
goals
37:09
and that makes this a legitimate thing
37:12
to move after
37:13
absolutely yeah i agree completely in
37:16
that we should
37:18
figure out ways to use data in a way
37:21
that's intentional i know
37:22
um china has some technology that they
37:26
are doing more
37:27
in terms of inserting in the process of
37:30
how the criminal justice system works
37:32
and and then
37:32
they're looking to see okay is this
37:35
prosecutor or judge
37:36
out of line in terms of how they're
37:38
sentencing with these cases and if they
37:40
are is there are there some corrections
37:41
that need to be made
37:42
uh i do think there are and there should
37:44
be opportunities
37:45
uh to use data i i think the opposite is
37:49
being done right now but i do think
37:50
we need to be more intentional the one
37:52
that comes really comes to mind is
37:54
facial recognition and how that's being
37:56
used
37:56
and there's a lot of challenges there
37:58
because they're you know it
38:00
facial recognition we know is very good
38:02
at pale males
38:04
and not much else and so but but
38:06
still
38:08
uh you know people on the security side
38:10
want to use it to identify
38:11
people even though it's horrible at
38:13
identifying
38:14
anybody but white men for the most part but
38:16
they still want to use the technology
38:18
though it's not ready to be used in a
38:20
way unless we're ready to have the
38:21
conversation about intentionality so
38:23
i agree completely though i'm sure
38:24
there's some disagreements on that but i
38:26
do
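The usual way to surface the disparity being described is a disaggregated evaluation: report accuracy per demographic group instead of a single average, roughly as in this sketch with made-up results.

```python
# Sketch of a per-group accuracy report: a system that only works for one group
# cannot hide behind its overall average. The example records are invented.
from collections import defaultdict

def accuracy_by_group(examples):
    """examples: iterable of (group, predicted_id, true_id) tuples."""
    totals, correct = defaultdict(int), defaultdict(int)
    for group, predicted, true in examples:
        totals[group] += 1
        correct[group] += int(predicted == true)
    return {group: correct[group] / totals[group] for group in totals}

results = [
    ("lighter_male", "p1", "p1"), ("lighter_male", "p2", "p2"),
    ("darker_female", "p3", "p9"), ("darker_female", "p4", "p4"),
]
print(accuracy_by_group(results))
# A gap like {'lighter_male': 1.0, 'darker_female': 0.5} is the evidence that the
# system is not ready for decisions that affect everyone.
```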
38:28
yeah i think it's a big thing right now
38:30
it's
38:31
really coming to light and people are
38:32
really invested
38:34
in making sure that companies and
38:36
individuals are using this data and
38:38
using these algorithms in an ethical way
38:40
one that comes to mind when we're
38:42
talking about like the ethics of how
38:44
you're using your data is facebook and
38:46
the election data
38:47
so whether you know it or not the things
38:50
that you're seeing based on
38:51
the data you consume and the data you
38:53
put out
38:54
can change your opinion and it can help
38:57
you form even more biases and it can
38:59
convince you of certain things and so i
39:02
think
39:03
like it's really good that it's coming
39:04
to light right now and people are
39:06
very invested in ensuring that our
39:08
algorithms and our data is being used in
39:10
the right way
39:12
and one person that i'd like to bring up
39:14
is joy buolamwini who founded the
39:16
algorithm
39:17
algorithmic justice league and she made
39:19
her mission with this
39:21
is to bring light to like the harm that
39:22
these discriminatory algorithms and ai
39:24
technologies can have on these affected
39:26
populations
39:28
and so i wanted to like talk about if
39:30
there's any other companies and
39:31
initiatives that
39:32
you know about that are taking like
39:34
these matters into their own hands and
39:36
like helping
39:36
address and mitigate this bias in data
39:39
algorithms and ai
39:40
yeah and if i can expand that question
39:42
too if you don't mind too i think to
39:43
to say what do you see as the future
39:48
of our relationship with data and how
39:51
how can we shape that i know well we've
39:53
talked about that but i think doug it's
39:54
a question that you've talked about a
39:56
little bit but i think
39:57
really expanding on that is it's a
40:00
really key concept so
40:01
i have one just point of view on this
40:03
that i think is going to come here
40:04
sooner rather than later i'm not aware
40:06
of as many organizations back to
40:08
lauren's question but i'd love to learn
40:10
and get to know them
40:11
um but but one of my my my sort of
40:13
theories for the future is is that we
40:15
are going to see a world where
40:17
we become far more transparent with in
40:19
terms of our privacy exchange
40:21
so if you think about it today we all
40:22
know this none of these things are free
40:23
i think everybody's kind of fully aware
40:25
of that we're using all these services
40:26
we're paying for them we're paying for
40:28
them with personal information
40:29
it's a commodity now i'm not here to
40:32
debate whether we should be doing that
40:33
or collecting personal information it's
40:34
it's it's a free country human beings
40:36
should be able to
40:37
engage any way that they want but the
40:39
point is that today there's not a lot of
40:40
transparency
40:42
both in the sense that the companies
40:44
don't really know what they're going to
40:45
do they just know they want to go
40:46
collect it all
40:47
and i don't really understand the value
40:49
of what i'm exchanging with you
40:51
so i see a world in the future where
40:53
that exchange becomes far more
40:55
transparent almost like
40:56
any other market exchange that we see so
40:58
let's take facebook for an example with
41:00
the facial recognition and the facial
41:01
scanning
41:02
facebook might decide to say hey look
41:03
we're going to scan your pictures
41:04
because we're building a big
41:06
you know corpus or training set of faces
41:08
that we're going to go use for whatever
41:09
the heck we want to go use it for i
41:11
should be able to decide you know what
41:12
i'm not okay with that
41:14
opt out what is the exchange 50 bucks a
41:16
month
41:17
okay well i can make those decisions i
41:20
can start to make those trades with that
41:21
commodity far more transparently
41:23
so i don't love regulation when it comes
41:25
to technology i think it can get in the
41:26
way of innovation
41:28
but i do think regulating that the way
41:30
that my
41:31
information gets used and providing me
41:33
insight into that so i can make a
41:35
decision
41:36
as a consumer is something that i would
41:38
definitely like to see
41:39
some regulatory involvement in because
41:42
if nothing else it's just allowing me to
41:43
participate fairly
41:45
in this value exchange that we've been
41:47
having now for a decade and it's new and
41:49
it's moving quickly
41:50
and this is also why i think a lot of
41:51
mistakes occur because companies are
41:53
sitting here
41:53
worried that if i don't move
41:55
aggressively if i don't move quickly
41:56
back to rob's point i'm going to miss
41:58
this wave and i'm going to become
41:59
irrelevant
42:00
so i'm empathetic to some of those
42:02
challenges i'm also just again as a
42:04
technologist
42:05
i'm fascinated by what's possible with
42:07
what's out there
42:08
but definitely again opportunities now
42:11
to kind of think about
42:12
new ways of engaging in this digital
42:14
world and creating a fair world where
42:16
you don't have to be
42:17
a digital native you don't have to have
42:19
a deep understanding of privacy
42:21
and technology to use these services in
42:23
an informed way
42:26
yeah well will in addition to being
42:28
a consumer you are also a citizen
42:30
and that's the thing people have
42:32
different roles
42:34
in society and we've seen the private
42:36
sector
42:37
has figured out how to really work hard
42:40
on
42:41
on that consumer part but government and
42:43
public sector
42:44
we we're way far behind in dealing with
42:46
that system but that's coming
42:48
and i think when you look at china you
42:50
you can see
42:51
that you know the relationship between
42:54
government and citizen and government
42:55
and individual is pretty important and
42:57
we need to get behind it so i want to
42:58
put a shout out here
42:59
to say we need some more dougs uh we
43:02
need to have conversations in government
43:04
uh we need to sit down and talk through
43:06
this uh it's
43:07
it's important let's have the fight
43:09
right um
43:11
so in dublin at the risk of saying even
43:13
more controversial stuff
43:14
i think about this a lot and we are
43:17
experimenting with a digital identity
43:20
product uh really system and platform
43:23
that is based on blockchain
43:25
so not going to go down the road and
43:26
talking all about blockchain and
43:28
everything
43:28
except for to say that we have the
43:31
opportunity today to develop
43:33
systems based on things that governments
43:36
and big corporations
43:37
cannot influence as much as we're
43:40
equalized
43:41
individuals are going to have some
43:43
influence where that's concerned
43:44
absolutely there's an opportunity for
43:46
transparency
43:47
and potentially in the world that i
43:50
envision the future world
43:52
we could have a situation where all of
43:54
the data that's being gathered
43:56
and observations that quite frankly this
43:59
is happening around us you don't have
44:01
control
44:02
over your data necessarily but we as a
44:05
government or
44:05
public sector should be in a position to
44:08
empower you as an individual
44:10
to see these things as they're gathered
44:12
and as you say will
44:14
opt out or exchange value sorry
44:17
and i don't know that you can have a
44:20
one-on-one relationship
44:22
with every company that has a value
44:24
stake in your data
44:26
but you might have a one-on-one
44:27
relationship with the community in which
44:29
you live
44:30
which might as a service have a way of
44:34
you
44:35
controlling or at least being able to
44:37
observe and have some transparency about
44:39
your data
44:40
certainly with regard to government
44:41
services so i do see a
44:43
a wider future at some point in which a
44:46
government
44:47
where you know taxes is not going to be
44:49
the way we necessarily fund ourselves
44:52
may have a value-added service that is
44:54
trusted
44:55
by people that i'm going to use this
44:57
service to interact
44:59
with a thousand different companies and
45:01
i'm going to know
45:02
when i'm being observed when when my
45:04
credit is being used
45:06
for a particular purpose and and it's
45:08
just going to
45:09
be different than what we think about
45:11
today and one last point i'll make about
45:14
this
45:14
i think this can be used as a tool
45:16
against bias
45:17
and against discrimination against
45:19
racism because it has the potential at
45:21
least
45:21
to equalize people's ability to do
45:24
something like
45:25
my access to a phone or a small
45:28
digital tool
45:29
is democratized it's not
45:32
i don't need a cray super computer in
45:34
order to
45:36
do this i should just need a cloud-based
45:38
service
45:39
that i can contract with for pennies a
45:41
month right so that's the future i
45:43
yeah i love it oh that's a great answer
45:45
those are great answers that
45:47
i didn't i didn't think about it from
45:48
the government point of view but that's
45:49
a really important point
45:50
that you can kind of be the intermediary
45:53
in between
45:54
you know and to make sure that that
45:56
happens because otherwise it's really
45:57
difficult to try to figure out
45:59
well how do you do that with
46:00
every single person that's trying to get
46:01
data that's fascinating and i think it's
46:03
really innovative that's a
46:04
really good thought process i never
46:05
thought about that way yeah i really
46:07
appreciate the
46:08
idea of transparency and that you're
46:11
bringing that
46:12
up so often and including that in like
46:14
your thought process for this
46:16
one of the things that like really
46:17
struck me when i was first learning
46:18
about data ethics was this idea of like
46:21
the unequal exchange of information
46:23
so back before like back in the day
46:25
before internet before computers before
46:26
we all had cell phones
46:28
your neighbor down the street you know
46:29
as much about them as they know about
46:31
you
46:32
in the case with these large companies
46:33
that have your information it's an
46:35
unequal exchange you're giving them your
46:37
information
46:38
and you aren't getting information back
46:40
in return so i like this idea of the
46:42
transparency in this more equal exchange
46:44
in terms of that and especially if
46:46
government can
46:47
help provide that for the specific
46:49
communities i think that's a great place
46:51
that we're moving toward
46:53
yep i agree absolutely um another
46:56
go ahead go ahead i was gonna say you
46:58
know what else is funny is that
47:00
a lot of this is not data that you gave
47:02
them or that even exists
47:04
the algorithms are so strong now that
47:06
their predictive capabilities
47:08
they can tell you my political
47:11
persuasion although i may never have
47:14
shared that they can tell you my sexual
47:16
preference they can tell you
47:17
all kinds of things that they can derive
47:20
based on
47:21
really strong analytics so you know it's
47:24
it's funny we
47:25
we assume that there's some data set
47:28
there that's completely accurate
47:30
they can do marketing on the basis of a
47:32
prediction algorithm that may or may not
47:34
be accurate
47:35
but they can still make a lot of money
47:36
on it and that's that's going to get
47:38
really weird because i can stop you from
47:40
using my data
47:41
but i don't know that i can stop you
47:43
from predicting how i'll do things just
47:45
because i'm a
47:46
x year old black man who lives in
47:48
reynolds
47:50
well this is i mean this is this is
47:51
exactly and and i think there's a
47:53
there's a concept here when it comes to
47:55
again how we explore
47:56
the call it the propensity i won't even
47:58
say bias the propensity for something
48:00
bad to happen
48:01
and it's what we call in my business
48:03
it's called signal data
48:04
right so to your point doug there is
48:06
data that is like explicit that you know
48:08
i request something i download something
48:11
then there's all this telemetry all
48:12
these signals that are out there and
48:14
part of you know what you do to build
48:15
good recommendation algorithms is you
48:17
understand okay can i aggregate a
48:19
variety of signals
48:20
to determine the next best action for an
48:22
individual so i may never have seen you
48:24
before
48:24
but the way you show up to my site the
48:26
way you move your mouse the things that
48:28
you look at the longest is all sort of
48:30
informing me
48:30
so that's a very you know kind of a cool
48:33
way to
48:34
build these predictive models but it's
48:36
also an opportunity for us to look at
48:38
each individual signal that we're using
48:41
to make these decisions to determine
48:42
bias i used a very basic example for my
48:44
company and again we weren't worried
48:46
about
48:46
offending anybody we were worried about
48:48
not having the most effective algorithm
48:49
for conversion
48:50
and that was whether or not we're going
48:52
to take color into consideration
48:54
because if you think about it in a
48:55
shopping ecommerce context when you're
48:57
building a recommendation model
48:59
color can be incredibly important it can
49:01
become irrelevant
49:02
very quickly and how we weight that
49:05
field will
49:06
create bias within that model so just
49:08
looking at that one signal color we can
49:10
have a conversation around whether this
49:11
can have a negative impact
49:13
we should be doing this around all of
49:15
the signals now when you get into things
49:16
like deep learning and vectors and where
49:18
the
49:19
computer has now gotten not only good at
49:21
performing these operations but
49:23
actually determining where new signals
49:25
might occur
49:26
and pulling those in we also have an
49:28
additional challenge but again you can
49:30
go back to
49:31
what is the unit that we are trying
49:33
to make sense of and what's our
49:34
intention
49:35
and just have a healthy
49:36
conversation about what could go good or
49:38
bad as a result of that
49:39
and i think that could alleviate or at
49:41
least prevent
49:42
a lot of these mistakes from occurring
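A minimal Python sketch of the weighted signal aggregation Will is describing; the signal names, weights, and items are invented for illustration and are not Lucidworks' actual model. It shows his point in miniature: change the weight on a single field, like color, and the ranking a shopper sees can flip, which is why each signal deserves its own bias review.

# Toy weighted-signal recommender; every name and number here is made up.
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    signals: dict  # signal name -> observed strength in [0, 1]

def score(item: Item, weights: dict) -> float:
    # Aggregate per-signal evidence into a single relevance score.
    return sum(weights.get(sig, 0.0) * val for sig, val in item.signals.items())

catalog = [
    Item("hiking hat",   {"viewed_with_boots": 0.9, "dwell_time": 0.6, "color_match": 0.2}),
    Item("red raincoat", {"viewed_with_boots": 0.4, "dwell_time": 0.5, "color_match": 0.9}),
]

for color_weight in (0.1, 1.5):
    weights = {"viewed_with_boots": 1.0, "dwell_time": 0.8, "color_match": color_weight}
    ranked = sorted(catalog, key=lambda it: score(it, weights), reverse=True)
    print(f"color_match weight {color_weight}: top recommendation = {ranked[0].name}")

Auditing a model like this means walking through the contributions signal by signal, which is exactly the conversation Will describes having about the color field: a heavy weight on one field quietly dominates every ranking it touches.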
49:45
i mean that's such an important point that
49:46
i want to touch on
49:47
uh lauren you and i talked about this
49:49
before in the prep that
49:50
uh netflix got in trouble with just this
49:52
right they went
49:53
and they were
49:55
doing exactly what you said will
49:57
uh signaling and looking at that and
49:59
then making recommendations for what
50:01
people want to see they still do this
50:03
now well you had one woman who was from
50:05
a very conservative family and she was
50:07
catholic and
50:08
didn't want necessarily
50:09
everybody to know that she was lgbt
50:11
right but they effectively outed her
50:13
the recommendations came up saying oh look
50:15
we know you're lgbt
50:16
and then you know her
50:17
family looks at this like wow like how
50:19
did you know that and
50:20
because of what you talked about but i
50:21
think there has to be the conversation
50:24
not only can we do this should we do
50:27
this
50:28
and if we're going to do this how do we
50:30
do this in an intentional way to cause
50:32
the least amount of harm as we have more
50:33
and more conversations about technology
50:36
that conversation always has to be there
50:38
because if not you'll have
50:40
more consequences than that but that's
50:41
one obvious obvious obvious example of
50:44
how things can go wrong and you cause
50:46
harm and that was likely unintentional
50:48
but again they were just
50:49
following the signals and there's no
50:51
process
50:53
in place to look at should we do this
50:55
what are the harmful effects and how are
50:57
we how will we account for that
50:59
uh that you can run across issues like
51:01
that and i think we will continue to run
51:03
across challenges like that as we go
51:04
along
51:05
as we really talk about the
51:06
ethics of this stuff, of
51:08
the technology too that has to
51:10
be part of the conversation
51:11
um so the big deal with this uh with
51:14
netflix and it's kind of scary, and it
51:16
ended up in a lawsuit, was that
51:18
um
51:19
they had this public contest and they
51:22
set out this public anonymized data set
51:24
and so they removed what they thought
51:26
was any personally identifying
51:28
information
51:28
and from this researchers were still
51:30
able to identify specific individuals
51:33
just from their history on netflix, without
51:36
what we typically think of as identifying
51:38
information like our name our zip code
51:40
where we live things like that
51:42
and that's what is so scary because your
51:45
personal data like
51:46
even if you're not giving specifics
51:48
about yourself
51:50
there's so much that can be found out
51:52
and tied
51:53
to you um based on just like what you're
51:56
talking about
51:56
well like how you go about certain
51:58
things and how you interact with
52:00
different tools
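The Netflix case Lauren describes was a linkage attack: a few outside observations about a person were enough to single out their record in the supposedly anonymized release. The Python sketch below is only a toy version of that idea; the user IDs, titles, and matching rule are invented and this is not the researchers' actual method.

# Toy linkage attack: match a few known facts about a person against an
# "anonymized" set of viewing histories. All records here are invented.
anonymized = {
    "user_0421": {"The Matrix", "Steel Magnolias", "Obscure Documentary 7"},
    "user_1388": {"The Matrix", "Die Hard", "Top Gun"},
    "user_2754": {"Steel Magnolias", "Obscure Documentary 7", "Old Foreign Film"},
}

# What an attacker might already know from casual conversation or public ratings.
known_about_target = {"Obscure Documentary 7", "Old Foreign Film"}

def best_match(aux: set, dataset: dict):
    # Return the record that overlaps most with the auxiliary information.
    return max(dataset.items(), key=lambda kv: len(aux & kv[1]))

user_id, history = best_match(known_about_target, anonymized)
print(user_id, "is the likely match; full history exposed:", sorted(history))

The uncomfortable part is that nothing in the matched record looks like identifying information on its own; the combination is what points back to one person.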
52:02
so for both of you do you have any processes or
52:03
systems in place that you use
52:06
to help you ensure
52:08
that your data and
52:09
your technologies are unbiased
52:14
well i'm curious to hear doug's answer
52:15
from our perspective because we serve
52:17
clients we're not doing our own data
52:19
capture our own data collection so we're
52:21
typically working with them as part of
52:23
their strategy
52:24
um and so most you know retailers um
52:27
yeah we do work with some governments
52:29
some healthcare
52:30
and we'll partner with part of their
52:31
privacy strategies and their data
52:33
strategies to determine
52:34
you know what is the appropriate level
52:36
of information that we want to process
52:38
really curious to hear how doug thinks
52:40
about this but before doug goes to that
52:42
let me ask this
52:44
when you think about how you approach it
52:46
with them do you
52:47
have a certain process to make them
52:49
think about how to view this in a way so
52:51
you don't get the
52:52
you know, not
52:53
to pick on microsoft or whatever, but you
52:55
don't end up with a microsoft example like
52:56
is there a process you may take your
52:58
clients through to consider as they are
53:00
collecting and going through the process
53:02
you got to remember we're playing a
53:03
pretty small piece in the overall
53:05
picture right so most of these brands
53:06
and these retailers we
53:08
work with are typically at a
53:09
minimum of about a billion dollars a
53:10
year in revenue so they're very large
53:12
they're omni-channel
53:14
and so we're coming in to help you know
53:16
with with a piece of their
53:17
call it data technical strategy for
53:19
their primarily their digital properties
53:21
right we are seeing some people bring
53:22
these into the stores and
53:23
and those kinds of things and
53:25
in those particular situations you know
53:26
we don't
53:27
impact a whole lot we're not deciding
53:30
you know visual treatments we're not
53:31
deciding who the
53:32
stock photos should picture we're not
53:34
deciding how you know inclusion gets
53:36
represented in those situations
53:37
what we're here to do is to help you
53:39
effectively sell more right so we're
53:41
helping you convert more
53:42
customers by getting them information
53:43
that they want we're empowering your
53:45
merchandisers to make those human
53:47
decisions to decide hey
53:48
the recommendation engine says this but
53:50
you know what if a person is looking at
53:51
these hiking boots and looking at this
53:53
pull i guarantee you i can sell them
53:54
this hat
53:55
you know that's something that like a
53:56
human being who has some domain
53:58
understanding and who's a hiker can make
54:00
those decisions right
54:01
so i just i bring that up not to say
54:03
that to give ourselves any excuses we're
54:05
still responsible for the decisions that
54:06
we make
54:07
it's just typically where we come in a
54:10
lot of that has already been
54:11
established on the other side of
54:13
the house for us like i said we work
54:14
internally with companies to help with
54:16
their own information
54:17
there i think there's a lot less risk
54:19
because what we're looking at there is a
54:21
lot more just domain specific
54:23
information if you're an oil and gas
54:24
company we're classifying
54:25
your maps and your research and we're
54:27
doing some image processing and we're
54:28
making that stuff retrievable
54:30
um typically does not fall into the
54:33
bucket of where you know some human
54:35
biases can
54:36
impact sort of the social experience
54:39
that i have
54:40
if anything it just makes me less
54:41
effective at my job when you fail to
54:43
surface information that i'm looking for
54:47
yeah and and so i i started it out today
54:50
by talking about how
54:52
typically in a small local government we
54:54
are not big
54:55
contributors to new technologies like we
54:58
don't build it
54:59
we tend to work with an innovator like
55:01
will's company
55:02
um but that being said we are doing that
55:04
when it comes to this blockchain
55:05
experiment
55:06
and this is what we think is the wave of
55:09
the future so
55:11
we uh grew up in a way that we did not
55:14
have
55:15
specific methods to prevent that and now
55:17
that data is exploding and our
55:19
information about
55:20
citizens is exploding we really need a
55:22
new platform a new
55:23
way of thinking about it and that is the
55:25
hope for that technology
55:27
that being said we also take a similar
55:29
philosophy to kind of what will is
55:31
saying
55:32
we did a machine learning exercise on
55:34
some data that we were not really
55:36
allowed to look at
55:37
and examine uh we have some tax data
55:41
that
55:42
maybe two or three people
55:44
in the city are really
55:45
allowed by law to even be reviewing it
55:48
but we wanted to do some predictive
55:49
analytics on it
55:50
what we managed to figure out how to do
55:52
is to take our algorithm
55:54
our machine learning model and put it on
55:56
the inside of that firewall
55:58
so i don't need to see it and i'm
55:59
really excited about this potential
56:02
our privacy concern involves people
56:06
looking at someone's information and
56:08
using it for a nefarious purpose
56:09
but if we can take the technology and
56:11
place it within the data set
56:13
and generate the uh insights and
56:17
and those kinds of things they become
56:19
useful
56:20
while still being anonymous and so i
56:22
want to continue thinking and working
56:24
about that as well
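Doug's move of sending the model to the data instead of the data to the analyst is sometimes called compute-to-data. The Python sketch below is a minimal, hypothetical illustration of that pattern, not Dublin's actual system; the field names and the small-group threshold are assumptions. Only aggregates cross the boundary, so the analyst outside never sees an individual record.

# Minimal "compute to data" pattern: the analysis runs where the sensitive
# records live, and only aggregate results leave the protected zone.
from statistics import mean

def run_inside_firewall(records: list, min_group_size: int = 10) -> dict:
    # Executed on the protected system; raw rows never leave it.
    withholdings = [r["withholding"] for r in records]
    if len(withholdings) < min_group_size:
        return {"error": "group too small to release safely"}
    return {
        "count": len(withholdings),
        "mean_withholding": round(mean(withholdings), 2),
    }

# Outside the firewall, analysts only ever see what the function returns.
sensitive_tax_rows = [{"taxpayer_id": i, "withholding": 1000 + 35 * i} for i in range(50)]
print(run_inside_firewall(sensitive_tax_rows))

The same shape works whether the analysis is a simple aggregate or a trained model: run it inside the boundary and release only output that has been checked for disclosure risk.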
56:25
the third thing that i want to make
56:27
sure i also capture here
56:29
is we often get into talking about the
56:31
dark side and the negativity
56:33
about how these analyses and these
56:35
insights they just seem to emerge
56:37
because this technology is so efficient
56:40
at being able to now tell all sorts of
56:42
things about people
56:43
there are positive things that we could
56:45
do with this
56:46
while we're trying to protect ourselves
56:48
and i'm going to say something
56:49
controversial about china now
56:51
and their social score which we all
56:54
appropriately hate
56:56
and we think it's a terrible invasion of
56:58
privacy that
56:59
you know people could get a social score
57:00
on some of these bases but i've also
57:02
seen other perspectives talking about
57:04
how
57:05
you know if you're the unbanked and you
57:08
do not have a credit score
57:09
you're not in a position to take
57:11
advantage of the signals
57:13
that other people have like a bank
57:15
account like owning property like a
57:16
driving record
57:18
like those kinds of things um and so
57:21
without i shouldn't even talk about
57:23
china but without going down in that
57:25
direction
57:26
how can some of these signals be used by
57:28
people
57:29
in a disadvantaged situation to acquire
57:33
credit
57:34
to get a bank account to get a better
57:37
score to get
57:38
uh some credibility in getting a job and
57:40
those kinds of things if those signals
57:42
could indicate a certain amount of
57:44
reliability and trustworthiness
57:46
you know we should continue thinking
57:49
about
57:50
how some of this data could be used for
57:52
the least of these
57:53
to build a profile about themselves that
57:56
they don't have
57:57
the $900 smartphone and
58:00
the location tracking that would
58:03
otherwise build that profile for them
58:05
yeah i
58:05
actually completely agree with that and
58:06
you sparked some thoughts in my head
58:08
one
58:09
um you really can't opt out of the
58:11
system anyway i tell people this like
58:12
look
58:14
you can try if you're opting out of
58:16
technology you're opting out of a whole
58:17
bunch of opportunities you're opting out
58:19
of
58:19
really powerful things so like yeah so
58:21
like you're, yeah, your
58:23
society, thank you, that's a better way
58:24
to say it, you're opting out of society
58:25
essentially so
58:26
it's not to think you can just be
58:28
anonymous it's probably not going to
58:30
work the better
58:31
the better idea is like like you know i
58:33
know there's a lot of worry about facial
58:34
recognition
58:35
frankly the more inputs they get of
58:37
people of color the more accurate it
58:38
becomes that's one example so i know
58:40
that's kind of controversial but
58:42
it's not like they're going to stop
58:43
using facial technology it's not going
58:44
to happen so
58:45
what we have to do is figure out how we
58:46
inject ourselves in the process not
58:47
remove ourselves from the process
58:49
because that is not going to do anything
58:51
except
58:51
removing ourselves from society i think that's a
58:53
good example uh beyond that though
58:55
there's also
58:56
you know there are there are great
58:57
opportunities to figure out how to use
58:59
data
59:00
to empower people and we've seen
59:02
examples of how that can play
59:04
out so i do think
59:06
there are ways to really look
59:08
at this to say well how can we use data
59:10
to make more accurate decisions in
59:11
health how can we use data in a way that
59:14
makes better decisions with our criminal
59:15
justice system
59:17
we know that these things can and do
59:19
work but i know there is a resistance
59:21
and i understand the resistance
59:23
uh from people wanting to worry about
59:25
their data i tell people
59:27
you really don't have any privacy right
59:28
now so the issue is
59:30
i mean if i'm really honest with you the
59:31
issue is how do we want to better affect
59:34
society moving forward yeah and how can
59:37
we help people
59:39
and how can we use this data to help
59:40
more people because at the end of the
59:42
day
59:43
when people are crying about their
59:45
freedoms and things like that
59:46
as they do it on facebook i got news for
59:48
you we know already so like
59:50
it's like posting from a cell phone talking
59:52
about how vaccines are going to track
59:54
you
59:54
um i will say one of my
59:56
favorite projects
59:57
is the matahari project out of
59:59
india and this was um
60:01
a big data project that was all around
60:03
identity you know india obviously had a
60:05
massive population
60:07
people living in various situations and
60:09
you know a lot of folks who are who are
60:11
needing
60:12
help are hard to identify so part of
60:14
this project was could we create a
60:15
system to provide
60:17
effectively like a a digital identity
60:19
for every single citizen so we now
60:22
understand the demographic of this you
60:24
know billion-plus person country
60:26
and you know india has about 60 billion
60:28
dollars of social programs that go in
60:30
every single year but not a lot of
60:32
visibility into where they need to be
60:34
distributed the most so the result of
60:35
this project that having all these
60:36
signals and having all this information
60:38
was really to help them make much more
60:39
informed decisions
60:40
on where they could invest these social
60:42
programs so i love things like that
60:44
i love hearing doug talk about you know
60:45
signals i think about this in hiring all
60:47
the time right when we think about
60:49
you know if i were to hire somebody off
60:50
of a resume off a paper off of where
60:52
they went to school
60:53
one you can make a lot of really bad
60:54
hires that way but two you miss out on a
60:56
lot of things so when i'm looking at
60:58
somebody's background there's a lot of
60:59
signals that i look for
61:00
overcoming adversity you know if you
61:02
were the first person in your
61:04
family to go to college you came
61:05
from a single family household you had
61:07
to work to put yourself through college
61:08
but you got that degree
61:09
then you got your first job then you put
61:11
yourself back into night school
61:12
to get your mba
61:13
like that is grit determination forget
61:16
about you know whether or not i'm you
61:18
know feeling
61:19
like your story is pulling on my
61:20
heartstrings i see an individual who is
61:22
performant
61:23
who is serially successful and so again
61:26
this is where i just
61:26
when doug was talking about credit it
61:28
made me think okay if that's how i think
61:29
about hiring
61:31
what makes you decide whether somebody's
61:32
fiscally responsible
61:34
it's not just whether or not they're in
61:35
debt as we know debt can sneak up on you
61:37
from a million
61:38
different ways for a million different
61:39
reasons life and death sometimes health
61:41
right
61:41
but there are plenty of other things
61:43
that we can interpret so i think it's
61:45
it's giving me a lot of encouragement in
61:46
hearing this conversation and thinking
61:48
about what's possible obviously we talk
61:49
about some of these pitfalls but there's
61:51
a lot of awareness
61:52
of these pitfalls as well i think lauren
61:53
alluded to that as well so it's an
61:55
exciting time and
61:56
participation is key and i would say
61:59
as to your point
62:00
the thought that i missed that
62:01
i wanted to talk about is one of our
62:03
speakers one of our partners solo funds
62:05
is actually doing just that
62:07
it was founded by rodney williams
62:09
and travis holloway they got a lot of
62:11
their funding actually here initially in
62:12
cincinnati
62:13
but solo funds they're
62:15
essentially disrupting the payday loan
62:17
industry
62:18
and they give loans for i think under a
62:20
thousand dollars
62:22
and they help re-evaluate looking at
62:24
different signals for
62:25
what a creditworthy borrower is and
62:28
they
62:28
let lenders and borrowers set their
62:31
own terms
62:32
and i mean the only thing they
62:34
do is they make sure that
62:36
you don't set a rate above i think 10 or
62:38
something like that but beyond that you
62:40
can set your own terms
62:41
and they also i think insure a lot of
62:43
the borrowers, up to 90 percent of their
62:44
loans and
62:45
they've had
62:46
really low default rates and they've
62:48
been able to help and what they're
62:49
trying to do is build another
62:51
kind of digital footprint for what
62:53
a credit score could be
62:55
and then hopefully take that to uh
62:57
larger banks but i think they they're
62:58
scheduled to do about 20 million in
63:00
loans this year
63:01
because they kind of picked up with
63:02
covid so that is
63:05
already happening with solo funds
63:07
you've heard from travis holloway
63:08
earlier today if you saw our
63:10
our our our family feud event you're
63:12
going to get a chance to network with
63:12
him later
63:13
so these things are being done and i
63:14
want to encourage people this is why
63:16
you know travis started and
63:19
he didn't have a bunch of money he just
63:20
had a background in
63:22
finance they went out and they were able
63:23
to do this
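Rob's SoLo Funds example is about scoring borrowers on signals other than a traditional credit file. The Python sketch below is a generic illustration of that idea, not SoLo Funds' actual model; every signal and weight is invented, and a real scorer would need exactly the signal-by-signal bias review discussed earlier in the panel.

# Toy alternative-signal repayment score; signals and weights are invented.
def repayment_score(borrower: dict) -> float:
    weights = {
        "on_time_rent_months": 2.0,      # per month of on-time rent in the last year
        "on_time_utility_months": 1.0,   # per month of on-time utility payments
        "prior_loans_repaid": 5.0,       # per small loan repaid in full
        "income_stability_years": 3.0,   # per year with the current income source
    }
    return sum(weights[k] * borrower.get(k, 0) for k in weights)

applicant = {
    "on_time_rent_months": 12,
    "on_time_utility_months": 10,
    "prior_loans_repaid": 2,
    "income_stability_years": 3,
}
print("score:", repayment_score(applicant))  # higher means more evidence of reliability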
63:24
so there are opportunities and i know
63:26
lauren you're about to get to this as we
63:28
get ready to wrap up to talk about where
63:31
you and i'll just ask this question
63:33
where would you advise
63:35
people that are maybe
63:36
just getting started in business or want
63:39
to know more about
63:40
ai and data what are some resources
63:43
articles
63:45
books that might be just some good
63:46
guidance and a place for them to start
63:48
as they want to kind of learn and and
63:50
develop in this process
63:52
i mean from my perspective i'm trying to
63:54
remember how i learned i mean you know
63:56
uh i i would say i am an advocate
63:59
for uh going back to basics
64:03
and as i mentioned earlier that
64:05
algorithms predate
64:06
computers right i mean the actuarial
64:08
tables of the insurance
64:10
industry you know those are algorithms
64:13
right algorithms are
64:14
on a whiteboard you can write one out
64:17
you need to understand that power and
64:20
the computer
64:21
and computer science and the scripting
64:23
of it and the language
64:25
becomes just a tool and sometimes we
64:28
raise up
64:28
our computer tools our computing tools
64:31
to a level where they don't belong
64:33
we need to understand the concepts of
64:35
the power of
64:36
compound interest for example is an
64:39
algorithm right
64:40
and you need to understand that on
64:41
pencil and paper so i do recommend that
64:43
people study economics and study
64:45
uh you know kind of social sciences and
64:47
those kinds of things
64:48
and then uh learn some aspect of
64:51
technology
64:52
you don't need to be a master at it but
64:54
you should learn a language or two
64:57
learn something about scripting and
64:58
databases and
65:00
play around with it there are
65:02
tools that you can download or even use
65:04
in the cloud
65:05
or you can just try a couple of scripts
65:07
out and get a feel for it
65:09
you may take that further or you may
65:10
have learned enough at that point so
65:13
everybody does not need to be a computer
65:14
scientist but that's enough for you to
65:16
start having a
65:17
conversation and understand enough about
65:20
oh i can see how if my assumptions were
65:23
abc
65:24
this algorithm is just going to take off
65:26
and it's gone forever
65:27
so that's the approach i would take
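Doug's compound-interest example really can be worked on paper or in a few lines of code. The Python sketch below just restates the standard formula A = P(1 + r/n)^(n*t) and checks it with a loop; the numbers are an arbitrary example.

# Compound interest as a pencil-and-paper algorithm, then as a loop.
# P = principal, r = annual rate, n = compounding periods per year, t = years.
P, r, n, t = 1000.0, 0.05, 12, 10

closed_form = P * (1 + r / n) ** (n * t)

balance = P
for _ in range(n * t):          # apply the periodic rate one period at a time
    balance *= 1 + r / n

print(round(closed_form, 2), round(balance, 2))  # both print about 1647.01

Seeing the loop and the closed form agree is the kind of whiteboard-level understanding he is pointing at; the computer is just doing the arithmetic faster.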
65:30
i want to just double down on this not
65:32
everybody has to be a computer scientist
65:34
i mean i get
65:34
uncomfortable with this sort of pressure
65:36
around stem especially into our
65:38
communities
65:39
um you know it's important and i'm glad
65:41
we're raising awareness but we also need
65:43
to understand that
65:44
we need we need all kinds of backgrounds
65:46
we need all kinds of input and to the
65:47
points that we've been making around how
65:48
artificial intelligence is in some ways
65:50
lowering the bar of contribution because
65:52
you don't have to be
65:53
a phd i think it actually increases the
65:56
demand
65:57
for those art history majors for those
66:00
philosophy majors for people who have
66:01
different ways of thinking that can
66:03
inject
66:04
that thinking into how we're doing this
66:05
so while i completely completely
66:08
subscribe to the idea that everyone
66:09
should conceptually
66:11
understand how software works how
66:14
systems
66:14
work now the implementation detail to
66:16
doug's point whether you know the
66:17
language whether you know the procedure
66:19
to me that's a question of whether or
66:21
not you find that interesting and if you
66:23
do then by all means go all in and i
66:25
will i will champion and support you but
66:26
it is by no means a requirement for
66:28
success in this new world that we're in
66:30
in the society that we're in and we need
66:32
people from all walks of life with
66:34
different ways of thinking contributing
66:36
to this especially now
66:37
this isn't 20 years ago when we were
66:39
just trying to get the first things
66:40
online
66:41
right this is the time now where that
66:42
human intuition that that constructive
66:44
thought
66:45
is just critical so i celebrate more
66:47
diverse backgrounds coming into the
66:48
industry
66:49
and encourage people to find things
66:50
you're passionate about first and
66:52
foremost
66:53
you know stem is just one way to
66:54
contribute i'll close on this but
66:56
there's a workshop i love to run with
66:57
young kids where i bring them in
66:58
and we go through bringing a product to
67:00
market from conception
67:02
to how we're going to package it to how
67:04
we're going to build it to how we're
67:05
going to put it into like a best buy
67:06
store it's usually like a video game
67:07
console or a music player
67:08
and my whole purpose of that exercise is
67:10
when we get done we look at it and we
67:11
say you know what
67:12
we spent like five minutes talking about
67:14
what we were going to build
67:15
that was such a little piece of this
67:17
whole thing we talked about promotion we
67:19
talked about selling we picked somebody
67:21
in the team who was going to go talk to
67:22
best buy and make sure that we got that
67:24
great
67:26
setting that we wanted and so it's just
67:28
helping people understand that there is
67:30
so much contribution and so many
67:32
different backgrounds that come
67:33
into this and we should be celebrating
67:35
that and encouraging them
67:38
as we get ready to go i'll just i'll do
67:40
something about
67:43
you told me is like you talked about how
67:45
you look at problems like what
67:46
what problem are you trying to solve and
67:48
what signals relate to that problem
67:50
and how will people be motivated by that
67:51
and i look at it like if you
67:53
know a problem you're trying to
67:54
solve think of it that way and think of
67:57
it from that level that instinctive
67:58
level
67:59
and not necessarily the technical level
68:01
because i think people get lost in the
68:02
technical and lose sight of the big
68:03
picture so
68:04
you got to do both absolutely so i think
68:07
that's good i know we're wrapping up
68:08
here let me just say uh
68:10
your questions and comments we're going
68:11
to take i think lauren will be taking
68:13
your questions
68:13
you get a chance to interact with both
68:15
doug and will but for everybody we
68:17
appreciate your time
68:18
and check out the podcast disruption now
68:20
as we continue to disrupt common
68:22
narratives and constructs i want to
68:23
thank
68:23
this whole panel for coming you guys
68:24
were awesome thank you really great
68:26
content
68:26
tons of fun thanks a lot thank you see
68:29
you later

HOSTED BY

ROB RICHARDSON

"A leader in change."

This is a re-play from the Disruption Now Summit 2020. Artificial Intelligence is less impactful unless you understand the human connections and problems you are seeking to solve. Artificial intelligence must have empathy and be more connected with the communities it impacts. Learn the tricks of the trade and the latest trends in automation.

CONNECT WITH THE HOST

Rob Richardson

Entrepreneur & Keynote Speaker

Rob Richardson is the host of the Disruption Now Podcast and the owner of DN Media Agency, a full-service digital marketing and research company. He has appeared on MSNBC and America This Week, and is a weekly contributor to Roland Martin Unfiltered.
