Saturday, April 20, 2019
Thích Nhất Hạnh (3:24:36)
3:24:36
Thich Nhat Hanh: "Mindfulness as a Foundation for Health" | Talks at Google
https://www.youtube.com/watch?v=Ijnt-eXukwk
Talks at Google
Published on Sep 23, 2011
Thursday, April 18, 2019
Nicholas Carr, Kai Fu Lee, Neil Postman
Nicholas Carr,
([ 3 views on Artificial Intelligence ])
Andrew Ng, Kai Fu Lee, Yann LeCun,
Neil Postman
56:06
Nicholas Carr with Timothy Wu: The Glass Cage
https://youtu.be/m2JMDk3XoM0?t=16
https://www.youtube.com/watch?v=m2JMDk3XoM0
92Y Plus
Published on Dec 16, 2014
29:18
Andrew Ng - The State of Artificial Intelligence
https://youtu.be/NKpuX_yzdYs?t=23
https://www.youtube.com/watch?v=NKpuX_yzdYs
The Artificial Intelligence Channel
Published on Dec 15, 2017
50:06
Dr Kai-Fu Lee - The Future of Artificial Intelligence
https://www.youtube.com/watch?v=sGmuuppF8lo
The Artificial Intelligence Channel
Published on Sep 5, 2017
This talk took place on August 29, 2017
36:46
Yann LeCun - Power & Limits of Deep Learning
https://youtu.be/0tEhw5t6rhc?t=141
https://www.youtube.com/watch?v=0tEhw5t6rhc
The Artificial Intelligence Channel
Published on Nov 19, 2017
ABC
A AI - artificial intelligence (Machine learning,
Data mining, Deep Learning)
meaning the neural-network, machine-learning sense of deep learning
https://en.wikipedia.org/wiki/Deep_learning
not the educational sense of deeper learning
https://en.wikipedia.org/wiki/Deeper_learning
B Big data (Data science)
https://en.wikipedia.org/wiki/Big_data
https://en.wikipedia.org/wiki/Data_science
C Cloud computing - Amazon Web Services (AWS),
Microsoft Azure,
IBM cloud,
Google Cloud Platform (GCP),
([
6:48
Theory of Constraints (TOC) 3 Bottle Oiled Wheels Demonstration
https://youtu.be/mWh0cSsNmGY?t=16
https://www.youtube.com/watch?v=mWh0cSsNmGY
Arrie van Niekerk
Published on Jul 5, 2012
])
Technology
Computer Sciences
October 16, 2015
System that replaces human intuition with algorithms outperforms human teams
by Larry Hardesty, Massachusetts Institute of Technology
http://phys.org/news/2015-10-human-intuition-algorithms-outperforms-teams.html
http://groups.csail.mit.edu/EVO-DesignOpt/groupWebSite/uploads/Site/DSAA_DSM_2015.pdf
Big-data analysis consists of searching for buried patterns that have some kind of predictive power. But choosing which "features" of the data to analyze usually requires some human intuition.
"What we observed from our experience solving a number of data science problems for industry is that one of the very critical steps is called feature engineering," Veeramachaneni says. "The first thing you have to do is identify what variables to extract from the database or compose, and for that, you have to come up with a lot of ideas."
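([
a minimal sketch of what the "feature engineering" step looks like in practice: composing candidate predictive variables from raw database records. the record format and the particular features below are hypothetical illustrations, not the MIT system's actual features.

```python
def engineer_features(purchases):
    """Compose candidate predictive variables from raw (price, category) records."""
    prices = [price for price, _ in purchases]
    n = len(purchases)
    return {
        "n_purchases": n,                                    # activity level
        "total_spend": sum(prices),                          # overall volume
        "avg_price": sum(prices) / n if n else 0.0,          # typical item price
        "n_categories": len({cat for _, cat in purchases}),  # breadth of interest
    }

features = engineer_features([(9.99, "books"), (4.50, "food"), (12.00, "books")])
```

each such variable is one "idea" in Veeramachaneni's sense; the hard part is generating many candidates and testing which ones actually carry predictive power.
])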
([
key ideas
data science/ extraction of knowledge from data //
the order in which we learn things does matter //
learn simple concept before learning more complex/abstract ones //
covariance/ structural homology/
homology/ the existence of shared ancestry between a pair of structures //
ontology/ where to put things/ ways to organize things //
The catastrophes are the surprises: all else is mere repetition //
morphology -
1. the branch of biology that deals with the form and
structure of animals and plants
3. any scientific study of form and structure,
as in physical geography
4. form and structure, as of an organism, regarded as a whole //
• "Data Science", which is the automatic (or semi-automatic) extraction of knowledge from data. ── Yann LeCun (self.MachineLearning)
• the goal of extracting information from data. ── Jure Leskovec, Anand Rajaraman, Jeffrey D. Ullman
• ... and thus discover something about data that will be seen in the future. ── Jure Leskovec, Anand Rajaraman, Jeffrey D. Ullman
• All algorithms for analysis of data are designed to produce a useful summary of the data, from which decisions are made. ── Jure Leskovec, Anand Rajaraman, Jeffrey D. Ullman, Mining of Massive Datasets, 2010; http://infolab.stanford.edu/~ullman/mmds/book.pdf
data science/ DS/ machine learning/ ML/ unsupervised feature learning/
unsupervised learning/ computer science/ CS/ science fiction/ SF/ sci-fi/
fantasy/ fa/ fiction/ fi/ Finland/ fi/ reinforcement learning/ RL/
deep learning/ DL/ artificial intelligence/ AI/ expert systems/
representation learning/ RL/
natural language processing / NLP/
Natural Language Speech Recognition/ NLSR/
Speaker-Independent Natural Language Speech Recognition/ SI-NLSR or NLSR/
Gary Klein, Sources of Power: How People Make Decisions, 1998 [ ]
p.276
One definition of uncertainty (paraphrasing Lipshitz and Shaul 1997) is “doubt that threatens to block action”. Key pieces of information are missing, unreliable, ambiguous, inconsistent, or too complex to interpret, and as a result a decision maker will be reluctant to act. In many cases, the action will be delayed or will be overtaken by events as windows of opportunity close.
p.277
Schmitt and Klein (1996) identified four sources of uncertainty:
1. Missing information. Information is unavailable. It has not been received or has been received but cannot be located when needed.
2. Unreliable information. The credibility of the source is low, or is perceived to be low even if the information is highly accurate.
3. Ambiguous or conflicting information. There is more than one reasonable way to interpret the information.
4. Complex information. It is difficult to integrate the different facets of the data.
We can also identify several different levels of uncertainty: the level of data; the level of knowledge, in which inferences are drawn about the data; and the level of understanding, in which the inferences are synthesized into projections of the future, into diagnoses and explanations of events.
(Klein, Gary, Sources of Power: How People Make Decisions / Gary Klein, 1. decision-making, 1998, 2001, 685.403, MIT Press)
])
19:47
Automation: Last Week Tonight with John Oliver (HBO)
https://www.youtube.com/watch?v=_h1ooyyFkF0
LastWeekTonight
Published on Mar 3, 2019
([
„Machine learning is a mathematical technique for training computer systems to make accurate predictions from a large corpus of training data, with a degree of accuracy that in some domains can mimic human cognition.“
—— Maciej Ceglowski,
May 7, 2019,
US Senate Committee on Banking, Housing, and Urban Affairs
on Privacy Rights and Data Collection in a Digital Economy
<< long read - scroll down to skip this section >>
Maciej Ceglowski's Senate testimony on Privacy Rights and Data Collection in a Digital Economy
May 7, 2019,
Senate Committee on Banking, Housing, and Urban Affairs
Privacy Rights and Data Collection in a Digital Economy (Senate hearing)
privacy
pinboard
regulation
gdpr
long read
https://idlewords.com/talks/senate_testimony.2019.5.htm
Consent in a world of inference
For example, imagine that an algorithm could inspect your online purchasing history and, with high confidence, infer that you suffer from an anxiety disorder. Ordinarily, this kind of sensitive medical information would be protected by HIPAA, but is the inference similarly protected? What if the algorithm is only reasonably certain? What if the algorithm knows that you’re healthy now, but will suffer from such a disorder in the future?
The question is not hypothetical—a 2017 study showed that a machine learning algorithm examining photos posted to the image-sharing site Instagram was able to detect signs of depression before it was diagnosed in the subjects, and outperformed medical doctors on the task.
Addendum: Machine Learning and Privacy
Machine learning is a mathematical technique for training computer systems to make accurate predictions from a large corpus of training data, with a degree of accuracy that in some domains can mimic human cognition.
For example, machine learning algorithms trained on a sufficiently large data set can learn to identify objects in photographs with a high degree of accuracy, transcribe spoken language to text, translate texts between languages, or flag anomalous behavior on a surveillance videotape.
The mathematical techniques underpinning machine learning, like convolutional neural networks (CNN), have been well-known since before the revolution in machine learning that took place beginning in 2012. What enabled the key breakthrough in machine learning was the arrival of truly large collections of data, along with concomitant [accompanies or is collaterally connected with] computing power, allowing these techniques to finally demonstrate their full potential.
It takes data sets of millions or billions of items, along with considerable computing power, to get adequate results from a machine learning algorithm. Before the advent of the surveillance economy, we simply did not realize the power of these techniques when applied at scale.
Because machine learning has a voracious appetite for data and computing power, it contributes both to the centralizing tendency that has consolidated the tech industry, and to the pressure companies face to maximize the collection of user data.
Machine learning models pose some unique problems in privacy regulation because of the way they can obscure the links between the data used to train them and their ultimate behavior.
A key feature of machine learning is that it occurs in separable phases. An initial training phase consists of running a learning algorithm on a large collection of labeled data (a time and computation-intensive process). This model can then be deployed in an exploitation phase, which requires far fewer resources.
Once the training phase is complete, the data used to train the model is no longer required and can conceivably be thrown away.
The two phases of training and exploitation can occur far away from each other both in space and time. The legal status of models trained on personal data under privacy laws like the GDPR, or whether data transfer laws apply to moving a trained model across jurisdictions, is not clear.
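([
the two separable phases can be sketched with ordinary least squares standing in as a deliberately tiny "learning algorithm" (an illustration of the phase split, not a real deep-learning pipeline):

```python
import numpy as np

# --- training phase: expensive, needs the full labeled data set ---
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                     # "large" training set
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=1000)      # labels
weights, *_ = np.linalg.lstsq(X, y, rcond=None)    # the trained model

del X, y  # once training is done, the data can conceivably be thrown away

# --- exploitation phase: cheap, needs only the trained weights ---
def predict(x):
    return float(x @ weights)
```

note that `weights` alone reveals nothing obvious about the individual records that produced it, which is exactly the regulatory puzzle the testimony goes on to describe.
])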
Inspecting a trained model reveals nothing about the data that went into it. To a human inspecting it, the model consists of millions and millions of numeric weights that have no obvious meaning, or relationship to human categories of thought. One cannot examine an image recognition model, for example, and point to the numbers that encode ‘apple’.
The training process behaves as a kind of one-way function. It is not possible to run a trained model backwards to reconstruct the input data; nor is it possible to “untrain” a model so that it will forget a specific part of its input.
Machine learning algorithms are best understood as inference engines. They find structure and excel at making inferences from data that can sometimes be surprising even to people familiar with the technology. This ability to see patterns that humans don’t notice has led to interest in using machine learning algorithms in medical diagnosis, evaluating insurance risk, assigning credit scores, stock trading, and other fields that currently rely on expert human analysis.
The opacity of machine learning models, combined with this capacity for inference, also make them an ideal technology for circumventing legal protections on data use. In this spirit, I have previously referred to machine learning as “money laundering for bias”. Whatever latent biases are in the training data, whether or not they are apparent to humans, and whether or not attempts are made to remove them from the data set, will be reflected in the behavior of the model.
A final feature of machine learning is that it is curiously vulnerable to adversarial inputs. For example, an image classifier that correctly identifies a picture of a horse might reclassify the same image as an apple, sailboat or any other object of an attacker’s choosing if they can manipulate even one pixel in the image. Changes in input data not noticeable to a human observer will be sufficient to persuade the model. Recent research suggests that this property is an inherent and ineradicable feature of any machine learning system that uses current approaches.
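([
a toy illustration of that fragility, using a linear classifier rather than the image models in the cited research (the "image" here is just a vector): a perturbation of at most 0.005 per component, aligned with the sign of the weights, flips the decision even though no single component changes noticeably.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=784)               # weights of a linear "image" classifier
x = -0.01 * w / np.linalg.norm(w)      # an input firmly classified negative
assert w @ x < 0

eps = 0.005                            # imperceptibly small per-pixel change
x_adv = x + eps * np.sign(w)           # FGSM-style adversarial perturbation
flipped = (w @ x_adv) > 0              # the classification flips
```

the trick is that a tiny change per dimension, summed over hundreds of dimensions all pushing the same way, adds up to a large change in the classifier's score.
])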
In brief, machine learning is effective, has an enormous appetite for data, requires large computational resources, makes decisions that resist analysis, excels at finding latent structure in data, obscures the link between source data and outcomes, defies many human intuitions, and is readily fooled by a knowledgeable adversary.
—Maciej Ceglowski, 2019
source:
https://tildes.net/~tech
])
([
1:27:10
Complete Machine Learning Course for Beginners || Machine Learning Tutorial for Beginners
https://youtu.be/J1_A-rdNBNQ
https://www.youtube.com/watch?v=J1_A-rdNBNQ
Geek's Lesson
Published on Jun 13, 2018
])
▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀
1:32:12
Eric Schmidt in conversation with James Cameron
https://youtu.be/dBse9xISbXQ?t=1119
https://youtu.be/dBse9xISbXQ?t=855
https://www.youtube.com/watch?v=dBse9xISbXQ
Published on Oct 18, 2011
Saturday interview
Joseph Stiglitz on artificial intelligence: 'We’re going towards a more divided society'
The technology could vastly improve lives, the economist says – but only if the tech titans that control it are properly regulated. ‘What we have now is totally inadequate’
by Ian Sample Science editor
Sat 8 Sep 2018 02.00 EDT
https://www.theguardian.com/technology/2018/sep/08/joseph-stiglitz-on-artificial-intelligence-were-going-towards-a-more-divided-society
Joseph Stiglitz
Stiglitz poses a question that he suspects tech firms have faced internally. “Which is the easier way to make a buck: figuring out a better way to exploit somebody, or making a better product? With the new AI, it looks like the answer is finding a better way to exploit somebody.”
The 2018 AAAI Spring Symposium Series
The Potential Social Impact of the Artificial Intelligence Divide
Andrew B. Williams
Humanoid Engineering & Intelligent Robotics Lab
University of Kansas
andrew.williams@ku.edu
https://aaai.org/ocs/index.php/SSS/SSS18/paper/download/17539/15382
A person’s dignity is closely tied to their ability to work and contribute to society and their family’s well-being.
Kelly, Kevin, 1952—
What technology wants / Kevin Kelly,
1. technology—social aspects.
2. technology and civilization.
T14.5.K45 2010
303.48'3—dc22
copyright © 2010
https://drive.google.com/open?id=1dDEBRwp3XyIKgu_bUyp18nf_9OOmpyyJ
pp.11—12
I dislike inventing words that no one else uses, but in this case all known alternatives fail to convey the required scope. So I've somewhat reluctantly coined a word to designate the greater, global, massively interconnected system of technology vibrating around us. I call it the technium. The technium extends beyond shiny hardware to include culture, art, social institutions, and intellectual creations of all types. It includes intangibles like software, law, and philosophical concepts. And most important, it includes the generative impulses of our inventions to encourage more tool making, more technology invention, and more self-enhancing connections. For the rest of this book I will use the term technium where others might use technology as a plural, and to mean a whole system (as in "technology accelerates"). I reserve the term technology to mean a specific technology, such as radar or plastic polymers. For example, I would say: "The technium accelerates the invention of technologies." In other words, technologies can be patented, while the technium includes the patent system itself.
([
“Once a development path is set on a particular course, then network externalities, the learning process of organizations, and the historically derived modelling of the issues reinforce the course.”
― Douglass North
• Uncertainty and path dependence
• https://www.youtube.com/watch?v=KKfkQW7_-Pg
1:42:42
The Predictioneer's Game
42:47 (start)
https://youtu.be/XfE0ih-6fi8?t=2567
44:00 (stop)
0:16 (start of talk)
https://youtu.be/XfE0ih-6fi8?t=16
1:07:10 (end of talk)
https://youtu.be/XfE0ih-6fi8?t=4030
https://www.youtube.com/watch?v=XfE0ih-6fi8
NYUAD Institute
Published on Sep 15, 2015
The Predictioneer's Game
December 9, 2009
Bruce Bueno de Mesquita will discuss how applied game theory can be used to anticipate policy choices whether in business or in government.
The Predictioneer's Game
https://slideplayer.com/slide/4437069/
])
▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀
p.194
Often we will invent a machine for a particular and limited purpose, and then, in what Neil Postman calls the Frankenstein syndrome, the invention's own agenda blossoms. "Once the machine is built," Postman writes, "we discover——always to our surprise——that it has ideas of its own; that it is quite capable not only of changing our habits but ... of changing our habits of mind." In this way, humans have become an adjunct to or, in Karl Marx's phrase, appendages of the machine.
p.196
In 1997, I interviewed [George] Lucas ... . ... [...] ... I asked him, "Do you think technology is making the world better or worse?" Lucas's answer:
If you watch the curve of science and everything we know, it
shoots up like a rocket. We're on this rocket and we're going
perfectly vertical into the stars. But the emotional intelligence
of humankind is equally if not more important than
our intellectual intelligence. We're just as emotionally illiterate
as we were 5,000 years ago; so emotionally our line is
completely horizontal. The problem is the horizontal and
the vertical are getting farther and farther apart. And as
these things grow apart, there's going to be some kind of
consequence of that.
I think we underestimate the strain of that gap.
(Kelly, Kevin, 1952—, T14.5.K45 2010, 303.48'3—dc22, copyright © 2010)
(What technology wants / Kevin Kelly, 1. technology—social aspects., 2. technology and civilization., )
([
https://en.wikipedia.org/wiki/Agrarian_society
https://en.wikipedia.org/wiki/Industrial_Revolution
https://en.wikipedia.org/wiki/The_Second_Machine_Age
https://en.wikipedia.org/wiki/Agrarian_society
12:05
Mesopotamia: Crash Course World History #3
https://www.youtube.com/watch?v=sohXPx_XZ6Y
CrashCourse
Published on Feb 9, 2012
https://en.wikipedia.org/wiki/Industrial_Revolution
11:04
Coal, Steam, and The Industrial Revolution: Crash Course World History #32
https://www.youtube.com/watch?v=zhL5DCizj5c
CrashCourse
Published on Aug 30, 2012
12:31
The Industrial Economy: Crash Course US History #23
https://www.youtube.com/watch?v=r6tRp-zRUJs
CrashCourse
Published on Jul 25, 2013
General Purpose Technologies "Engines of Growth?"
Timothy F. Bresnahan, Manuel Trajtenberg
NBER Working Paper No. 4148 (Also Reprint No. r2008)
Issued in August 1992
https://www.nber.org/papers/w4148
https://www.nber.org/papers/w4148.pdf
Friedman, Thomas L.
The world is flat : a brief history of the 21st century / Thomas L. Friedman -- 1st rev. and expanded ed.
1. diffusion of innovations
2. information society
3. globalization--economic aspects
4. globalization--social aspects
2005, 2006
303.4833
https://drive.google.com/open?id=15SPosAZbzOeMG2INpHKm6-iXCrBAguSp
pp.204-205, p.207, p.208, p.208
pp.204-205
As Stanford University economist Paul Romer pointed out, economists have known for a long time that “there are goods that are complementary -- whereby good A is a lot more valuable if you also have good B. It was good to have paper and then it was good to have pencils, and as soon as you got more of one you got more of the other, and as you got a better quality of one and better quality of the other, your productivity improved. This is known as the simultaneous improvement of complementary goods.”
p.207
In a pathbreaking 1989 essay, “Computer and Dynamo: the modern productivity paradox in a not-too distant mirror,” the economic historian Paul A. David explained such a lag by pointing to a historical precedent. He noted that while the lightbulb was invented in 1879, it took several decades for electrification to kick in and have a big economic and productivity impact. Why? Because it was not enough just to install electric motors and scrap the old technology -- steam engines. The whole way of doing manufacturing had to be reconfigured. In the case of electricity, David pointed out, the key breakthrough was in how buildings, and assembly lines, were redesigned and managed. Factories in the steam age tended to be heavy, costly multistory buildings designed to brace the weighty belts and other big transmission devices needed to drive steam-powered systems. Once small, powerful electric motors were introduced, everyone hoped for a quick productivity boost. It took time, though. To get all the savings, you needed to redesign enough buildings. You needed to have long, low, cheaper-to-build single-story factories, with small electric motors powering machines of all sizes. Only when there was a critical mass of experienced factory architects and electrical engineers and managers, who understood the complementarities among the electric motor, the redesign of the factory, and the redesign of the production line, did electrification really deliver the productivity breakthrough in manufacturing, David wrote.
p.208
Many of the 10 flatteners have been around for years. But for the full flattening effects to be felt, we needed not only the 10 flatteners to converge but also something else. We needed the emergence of a large cadre of managers, innovators, business consultants, business schools, designers, IT specialists, CEOs, and workers to get comfortable with, and develop, the sorts of horizontal collaboration and value-creation processes and habits that could take advantage of this new, flatter playing field. In short, the convergence of the 10 flatteners begat the convergence of a set of business practices and skills that would get the most out of the flat world. And then the two began to mutually reinforce each other.
p.208
Stanford University economist Paul Romer
“When people asked, ‘Why didn't the IT [Information Technology - computers, embedded computing device, mass storage, software, networking, Apple iphone and other like technology, having a processing unit and programmable] revolution lead to more productivity right away?’ it was because you needed more than just new computers,” said Romer. “You needed new business processes and new types of skills to go with them. The new way of doing things makes the information technologies more valuable, and the new and better information technologies make the new ways of doing things more possible.”
(Friedman, Thomas L., The world is flat : a brief history of the 21st century / Thomas L. Friedman -- 1st rev. and expanded ed., 1. diffusion of innovations, 2. information society, 3. globalization--economic aspects, 4. globalization--social aspects, 2005, 2006, 303.4833, pp.204-205, p.207, p.208, p.208)
])
▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀
Nicholas Carr's blog
On autopilot: the dangers of over automation
The grounding of Boeing’s popular new 737 Max 8 planes, after two recent crashes, has placed a new focus on flight automation. Here’s an excerpt from my [Nicholas Carr] 2014 book on automation and its human consequences, The Glass Cage, that seems relevant to the discussion.
http://www.roughtype.com/?p=8622
Rory Kay, a long-time United Airlines captain who until recently served as the top safety official with the Air Line Pilots Association, fears the aviation industry is suffering from “automation addiction.” In a 2011 interview, he put the problem in stark terms: “We’re forgetting how to fly.”
▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀
28:35
Neil Postman Are We Amusing Ourselves to Death Part I, Dec. 1985
https://www.youtube.com/watch?v=FRabb6_Gr2Y
ashikmlakonja
Published on Dec 18, 2011
28:54
Neil Postman Are We Amusing Ourselves to Death Part II, Jan. 1986
https://www.youtube.com/watch?v=zHd31L6XPEQ
ashikmlakonja
Published on Dec 19, 2011
1:25:12
College Lecture Series - Neil Postman - "The Surrender of Culture to Technology"
https://youtu.be/hlrv7DIHllE?t=173
https://www.youtube.com/watch?v=hlrv7DIHllE
College of DuPage
Published on Jun 3, 2013
A lecture delivered by Neil Postman on Mar. 11, 1997 in the Arts Center. Based on the author's book of the same title. Neil Postman notes the dependence of Americans on technological advances for their own security. Americans have come to expect technological innovations to solve the larger problems of mankind. Technology itself has become a national "religion" which people take on faith as the solution to their problems.
7 questions
1. what is the problem to which this technology is a solution?
2. whose problem is it?
3. suppose we solve this problem, and solve it decisively, what new problems might be created because we have solved the problem?
4. which people and what institutions might be most seriously harmed by a technological solution?
5. what changes in language are being enforced by new technologies?
what is being gained and what is being lost by such changes?
6. what sort of people and institutions acquire special economic and political power, because of technological change?
this question needs to be asked, because the transformation of a technology into a medium always results in a realignment of economic and political power.
7. what alternative uses might be made of a technology? one proceeds here by assuming that any medium we have created is not necessarily the only one we might make of a particular technology
https://youtu.be/hlrv7DIHllE?t=1035
1. what is the problem to which this technology is a solution?
now this question needs to be asked, because there are technologies that are not a solution to any problem that a normal person would regard as significant
https://youtu.be/hlrv7DIHllE?t=1440
2. whose problem is it?
but this question, whose problem is it, needs to be applied to any technology. most technologies do solve some problem, but the problem may not be everybody's problem or even most people's problem. we need to be very careful in determining who will benefit from a technology, and who will pay for it. they are not always the same people.
https://youtu.be/hlrv7DIHllE?t=1521
3. suppose we solve this problem, and solve it decisively, what new problems might be created because we have solved the problem?
the automobile solves some very important problems for most people
https://youtu.be/hlrv7DIHllE?t=1740
4. which people and what institutions might be most seriously harmed by a technological solution?
https://youtu.be/hlrv7DIHllE?t=2259
5. what changes in language are being enforced by new technologies?
what is being gained and what is being lost by such changes?
https://youtu.be/hlrv7DIHllE?t=2746
6. what sort of people and institutions acquire special economic and political power, because of technological change?
this question needs to be asked, because the transformation of a technology into a medium always results in a realignment of economic and political power.
https://youtu.be/hlrv7DIHllE?t=2925
7. what alternative uses might be made of a technology? one proceeds here by assuming that any medium we have created is not necessarily the only one we might make of a particular technology
https://youtu.be/hlrv7DIHllE?t=3037
1. what is the problem to which a technology claims to be the solution
2. whose problem is it
3. what new problems will be created because of solving an old one
4. which people and institutions will be most harmed
5. what changes in language are being promoted
6. what shifts in economic and political power are likely to result
7. what alternative media might be made from a technology
automobile, television, computer
the same blindness, no one is asking anything worth asking
https://youtu.be/hlrv7DIHllE?t=3629
60:29 Tocqueville says in democracy in America
([
the video above has really low volume. if you are not using headphones and want to boost your laptop or pc speaker loudness level, check out the following video
TIL how to make video louder in windows 10
https://www.google.com/search?&q=how+to+make+video+louder+in+windows+10
this was the result
How to Increase the Maximum Volume in Windows 10 - YouTube
https://www.youtube.com/watch?v=R1sv5bsC6cg
I access the same feature this way:
right click on the 'Speaker icon' on lower right hand corner of the screen located on the taskbar
a pop-up menu should show up
select 'Sounds'
a window should pop-up, named Sound
click the 'Playback' tab ([1st tab reading from left to right])
select 'Speakers'
click on the 'Properties' button ([lower right corner of window])
a window should pop-up, named Speakers Properties
click on 'Enhancements' tab
check box 'Loudness Equalization'
click on the 'Apply' button ([bottom right of window])
OK
OK
])
Friday, April 12, 2019
Yuval Noah Harari, Tristan Harris
56:39
Yuval Noah Harari and Tristan Harris interviewed by Wired
https://www.youtube.com/watch?v=v0sWeLZ8PXg
Yuval Noah Harari
Published on Dec 3, 2018
Yuval Noah Harari and Tristan Harris are interviewed by Wired's editor-in-chief, Nicholas Thompson.
https://youtu.be/v0sWeLZ8PXg?t=180
03:00 Starting when I was a magician as a kid,
03:02 where you learned that there's things
03:03 that work on all human minds,
03:05 it doesn't matter whether they have a PhD
03:06 or whether they, you know, what education level they have,
03:09 whether they're a nuclear physicist,
03:10 what age they are, it's not like,
03:12 oh, if you speak Japanese I can't do this trick on you,
03:14 it's not gonna work, or you have a PhD,
03:16 it works on everybody.
03:17 So somehow there's this discipline
03:19 which is about universal exploits on all human minds.
https://youtu.be/v0sWeLZ8PXg?t=308
05:08 Now people, some people, corporations, governments,
05:12 they are gaining the technology to hack human beings.
05:23 - Well, explain what it means to hack a human being,
05:27 and why what can be done now is different from
05:30 what could be done a hundred years ago with religion
05:34 or with a book or with anything else
05:40 - To hack a human being is to understand
05:42 what's happening inside you on the level of the body,
05:45 of the brain, of the mind,
05:47 so that you can predict what people will do,
05:50 you can understand how they feel,
05:53 and you can, of course, once you understand and predict,
05:55 you can usually also manipulate and control
05:58 and even replace.
06:00 And of course it can't be done perfectly,
06:03 and it was possible to do it
06:04 to some extent also a century ago,
06:07 but the difference in the level is significant.
06:22 The algorithms that are trying to hack us,
06:25 they will never be perfect,
06:26 there is no such thing as understanding perfectly everything
06:30 or predicting everything.
06:32 You don't need perfect,
06:34 you just need to be better than the average human being.
https://youtu.be/v0sWeLZ8PXg?t=922
15:22 One of the interesting things that I've been following
15:24 is also the ways you can ascertain those signals
15:27 without an invasive sensor,
15:29 and we were talking about this a second ago.
15:31 There's something called Euler video magnification,
15:35 where you point a computer camera at a person's face,
15:39 and a human being can't look at,
15:40 I can't look at your face and see your heart rate.
15:43 My intelligence doesn't let me see that.
15:45 - You can see my eyes dilating, right,
15:47 and you can see-- - But I can see
15:48 your eyes dilating, so-- - 'Cause I'm terrified of you.
15:49 - But if I put-- (everyone laughs)
15:51 If I put a supercomputer behind the camera,
15:54 I can actually run a mathematical equation,
15:57 and I can find the micropulses of blood to your face
16:00 that I as a human can't see but the computer can see,
16:02 so I can pick up your heart rate.
16:04 What does that let me do?
16:04 I can pick up your stress level
16:06 because heart rate variability gives you your stress level.
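([ The HRV claim above can be made concrete. A standard time-domain HRV metric is RMSSD, computed from the gaps between successive heartbeats; lower variability is commonly read as higher arousal or stress. A minimal sketch on invented inter-beat intervals, not a clinical measure: ])

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between inter-beat
    (RR) intervals: a standard time-domain HRV metric. Lower RMSSD
    is commonly read as higher physiological arousal (stress)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Invented RR series in milliseconds (both around 75 bpm)
relaxed  = [800, 820, 790, 830, 805, 815]   # beat-to-beat gaps vary a lot
stressed = [800, 802, 799, 801, 800, 801]   # beat-to-beat gaps are rigid

print(rmssd(relaxed) > rmssd(stressed))  # → True
```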
Fujitsu technology measures a person’s real time pulse using facial imaging
Dhiram Shah
on 18 March, 2013 at 11:24
https://fareastgizmos.com/computing/fujitsu-technology-measures-a-persons-real-time-pulse-using-facial-imaging.php
Fujitsu Laboratories Develops Real-Time Pulse Monitor Using Facial Imaging
Measures pulse in as little as five seconds using built-in cameras in PCs or smartphones, enabling ongoing health tracking
Fujitsu Laboratories Ltd.
Kawasaki, Japan, March 18, 2013
Date: 18 March, 2013
City: Kawasaki, Japan
Company: Fujitsu Laboratories Ltd
https://www.fujitsu.com/global/about/resources/news/press-releases/2013/0318-01.html
Fujitsu Laboratories Limited today announced that it has developed a technology to measure a person's pulse in real time using a built-in camera or webcam in a PC, smartphone or tablet.
2:13
Measure your pulse in real time with Fujitsu's facial imaging technology #ipnexus
https://www.youtube.com/watch?v=_zVLwIFtzic
IP Nexus
Published on Feb 17, 2015
Fujitsu has developed technology which can measure a person's pulse in real time by analyzing video of their face.
https://people.csail.mit.edu/mrub/papers/vidmag.pdf
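([ The vidmag paper above and the Fujitsu system rest on the same signal-processing idea: skin brightness fluctuates faintly with each heartbeat, so you band-pass a per-frame brightness trace to plausible heart-rate frequencies (roughly 0.75 to 4 Hz) and take the dominant spectral peak. A sketch on synthetic data; a real system would first detect and track a face region: ])

```python
import numpy as np

fps = 30.0                     # camera frame rate
t = np.arange(0, 10, 1 / fps)  # 10 seconds of "video", 300 frames

# Synthetic per-frame mean brightness of a face region:
# a faint 1.2 Hz pulse (72 bpm) under slow lighting drift and noise.
signal = (0.02 * np.sin(2 * np.pi * 1.2 * t)
          + 0.50 * np.sin(2 * np.pi * 0.1 * t)   # lighting drift
          + 0.01 * np.random.default_rng(0).standard_normal(t.size))

# Keep only spectrum bins in the plausible heart-rate band (45-240 bpm)
freqs = np.fft.rfftfreq(t.size, d=1 / fps)
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
band = (freqs >= 0.75) & (freqs <= 4.0)
pulse_hz = freqs[band][np.argmax(spectrum[band])]
print(round(pulse_hz * 60), "bpm")  # → 72 bpm
```

The drift term is 25 times stronger than the pulse, yet the band restriction ignores it; that is the whole trick behind pulling a heart rate out of an ordinary webcam.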
16:08 I can point, there's a woman named Poppy Crum
16:10 who gave a TED talk this year
16:12 about the end of the poker face,
<< updated post 15 May 2019 >>
'Poker face' stripped away by new-age tech
Published Apr 14, 2018, 3:23 pm SGT
VANCOUVER (AFP) - Dolby Laboratories chief scientist Poppy Crum tells of a fast-coming time when technology will see right through people no matter how hard they try to hide their feelings.
Sensors combined with artificial intelligence can reveal whether someone is lying, infatuated, or poised for violence, Dr Crum detailed at a big ideas TED Conference.
"It is the end of the poker face," Dr Crum said.
https://www.straitstimes.com/world/americas/poker-face-stripped-away-by-new-age-tech
Dolby Laboratories chief scientist Poppy Crum sees a fast-coming time when technology will see right through people no matter how hard they try to hide their feelings.PHOTO: AFP
<< initial post >>
'The end of the poker face': New technology poised to read emotions
AFP-JIJI
Apr 15, 2018
VANCOUVER, BRITISH COLUMBIA - Dolby Laboratories chief scientist Poppy Crum tells of a fast-coming time when technology will see right through people no matter how hard they try to hide their feelings.
<< Article expired >>
https://www.japantimes.co.jp/news/2018/04/15/world/science-health-world/end-poker-face-new-technology-poised-read-emotions/
<< The article you have been looking for has expired and is no longer available on our system. This is due to newswire licensing terms. >>
Dolby Laboratories chief scientist Poppy Crum poses for a photo after speaking at a TED Conference in Vancouver, British Columbia, on Thursday about coming technology that will reveal hidden feelings or even lies. | AFP-JIJI
https://youtu.be/v0sWeLZ8PXg?t=979
16:19 But this talk is about the erosion of that,
16:21 that we can point a camera at your eyes
16:23 and see when your eyes dilate,
16:25 which actually detects cognitive strains
16:26 when you're having a hard time understanding something
16:28 or an easy time understanding something.
16:30 And we can continually adjust this
16:32 based on your heart rate, your eye dilation.
16:47 your big five personality traits,
16:50 if I know Nick Thompson's personality
16:53 through OCEAN: openness, conscientiousness,
16:56 extrovertedness, agreeableness and neuroticism,
16:59 that gives me your personality,
17:00 and based on your personality
17:02 I can tune a political message to be perfect for you.
https://youtu.be/v0sWeLZ8PXg?t=1037
17:17 But now this woman named Gloria Mark at UC Irvine
17:20 who's done research showing
17:22 you can actually get people's big five personality traits
17:25 just by their click patterns alone, with 80% accuracy,
17:35 We're gonna be able to point AIs at human animals
17:38 and figure out more and more signals from them,
17:40 including their micro-expressions,
17:41 when you smirk and all these things.
17:42 We've got face-ID cameras on all of these phones,
17:46 so now if you have a tight loop
17:47 where I can adjust the political messages in real time
17:50 to your heart rate and to your eye dilation
17:53 and to your political personality,
17:56 that's not a world you want to live in,
17:57 it's a kind of dystopia.
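([ Mechanically, the "click patterns to Big Five" result described at 17:17 is ordinary supervised learning: behavioral features in, trait label out. A toy sketch on synthetic data; the feature names and the link to any real trait are invented here, and only the 80% figure comes from the talk: ])

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-user "click pattern" features (all invented):
# [clicks per minute, mean dwell time, tab switches per minute]
n = 400
X = rng.normal(size=(n, 3))

# Invented ground truth: pretend one trait (say, extraversion) tracks
# fast, shallow clicking. Real labels would come from questionnaires.
y = (1.5 * X[:, 0] - 1.0 * X[:, 1]
     + rng.normal(scale=0.5, size=n) > 0).astype(float)

# Plain logistic regression fitted by gradient descent
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / n
    b -= 0.1 * (p - y).mean()

p = 1 / (1 + np.exp(-(X @ w + b)))
accuracy = ((p > 0.5) == y).mean()
print(accuracy)  # well above chance on this synthetic task
```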
([
„Machine learning is a mathematical technique for training computer systems to make accurate predictions from a large corpus of training data, with a degree of accuracy that in some domains can mimic human cognition.“
—— Maciej Ceglowski,
May 7, 2019,
US Senate Committee on Banking, Housing, and Urban Affairs
on Privacy Rights and Data Collection in a Digital Economy
<< long read - scroll down to skip this section >>
Maciej Ceglowski's Senate testimony on Privacy Rights and Data Collection in a Digital Economy
May 7, 2019,
Senate Committee on Banking, Housing, and Urban Affairs
Privacy Rights and Data Collection in a Digital Economy (Senate hearing)
https://idlewords.com/talks/senate_testimony.2019.5.htm
Consent in a world of inference
For example, imagine that an algorithm could inspect your online purchasing history and, with high confidence, infer that you suffer from an anxiety disorder. Ordinarily, this kind of sensitive medical information would be protected by HIPAA, but is the inference similarly protected? What if the algorithm is only reasonably certain? What if the algorithm knows that you’re healthy now, but will suffer from such a disorder in the future?
The question is not hypothetical—a 2017 study showed that a machine learning algorithm examining photos posted to the image-sharing site Instagram was able to detect signs of depression before it was diagnosed in the subjects, and outperformed medical doctors on the task.
—Maciej Ceglowski, 2019
source:
https://tildes.net/~tech
])
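([ Ceglowski's anxiety-inference example is, at its simplest, Bayes' rule applied to innocuous observations. A toy naive Bayes sketch; every number below is invented for illustration: ])

```python
# P(item | group), as if learned from hypothetical labeled purchase
# histories, plus a base rate. All figures are made up.
p_item = {
    "anxious":     {"chamomile_tea": 0.30, "self_help_book": 0.40, "coffee": 0.30},
    "not_anxious": {"chamomile_tea": 0.10, "self_help_book": 0.10, "coffee": 0.80},
}
prior = {"anxious": 0.2, "not_anxious": 0.8}

def posterior(purchases):
    """P(group | purchases) under naive conditional independence."""
    score = {}
    for g in prior:
        s = prior[g]
        for item in purchases:
            s *= p_item[g][item]
        score[g] = s
    total = sum(score.values())
    return {g: s / total for g, s in score.items()}

basket = ["chamomile_tea", "self_help_book", "self_help_book"]
print(posterior(basket))  # "anxious" jumps from the 0.2 prior to about 0.92
```

Three perfectly ordinary purchases move the odds from 1-in-5 to better than 9-in-10, which is exactly the consent problem the testimony describes: no single datum is sensitive, but the inference is.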
0:36
Minority Report Mall Scene
https://www.youtube.com/watch?v=oBaiKsYUdvg
fayizaugusto
Published on Feb 28, 2008
This is a scene from the movie Minority Report. Tom Cruise walks in the mall while his eyes are getting scanned by 3D screens. The screens call him directly by his name to get his attention.
1:02
Minority Report - Personal Advertising in the Future
https://www.youtube.com/watch?v=7bXJ_obaiYQ
dscmailtest
Published on Dec 7, 2010
The future of personal advertising according to the film Minority Report
3:48
Minority Report tech: 15 years later
https://www.youtube.com/watch?v=euJdKsOYnXk
CNN Business
Published on Jun 26, 2017
It's been 15 years since the release of "Minority Report," a film that predicted the future with surprising precision. Take a look at what technologies the film foretold and what technologies might still be to come.
Thursday, April 4, 2019
West (西) and East (東)
45:42
([ ??? ])
West and East, Cultural Differences
https://www.youtube.com/watch?v=ZoDtoB9Abck
Christian R. Bueno
Published on Dec 5, 2012
42:53
([ ??? ])
West and East, Cultural Differences 2/2
https://www.youtube.com/watch?v=jLh4QZDyNUA
Christian R. Bueno
Published on Dec 5, 2012
12:03
EAST or WEST: Which mindset do you have?
https://www.youtube.com/watch?v=yWuQ063_-ts
Off the Great Wall
Published on Jun 13, 2016
22:39
Chinese Philosophy An introduction to an introduction
https://www.youtube.com/watch?v=0N4bnv17Scg
Högskolan Dalarna
Published on Apr 18, 2011
5:22
You're such an essentialist
https://www.youtube.com/watch?v=2WSbcd4zYak
Julia Galef
Published on Jan 23, 2012
19:17
Why Do We Like What We Like?
https://www.youtube.com/watch?v=FREh6GyHk1k
The RSA
Published on Jul 18, 2011
Acclaimed evolutionary psychologist Paul Bloom reveals how certain universal aspects of the human mind explain our curious desires, tastes and pleasures.
12:53
Do Asians THINK Differently?
https://www.youtube.com/watch?v=aEd7msMYLgU
Creatively eXplained
Published on Sep 23, 2018
Tuesday, April 2, 2019
professional knowledge of the laws of learning
https://www.edsurge.com/news/2019-03-27-hoping-to-spur-learning-engineering-carnegie-mellon-will-open-source-its-digital-learning-software
In a 1967 article in Educational Record, he said that, compared to other organizations in America, colleges are run by amateurs, calling professors “almost completely untrained in the skills of professing: that is, of teaching.”
“We take the traditional organization of colleges so much for granted that we must step back and view them with Martian eyes, innocent of their history, to appreciate fully how outrageous their operation is,” he wrote. “If we visited an organization responsible for designing, building, and maintaining large bridges, we would expect to find employed there a number of trained and experienced professional engineers, thoroughly educated in mechanics and other laws of nature that determine whether a bridge will stand or fall. ... What do we find in a university? Physicists well educated in physics, and trained for research in that discipline; English professors learned in their language and its literature (or at least some tiny corner of it); and so on down the list of the disciplines. But we find no one with a professional knowledge of the laws of learning, or of the techniques of applying them.”
“Like Herb Simon said, we need to change higher ed from a solo sport to a collaborative research activity,” said Koedinger [Kenneth R. Koedinger, a professor of human computer interaction and psychology at Carnegie Mellon].
http://digitalcollections.library.cmu.edu/awweb/awarchive?type=file&item=33692
16:54
'What if Finland's Great Teachers Taught in Your Schools?' Pasi Sahlberg - WISE 2013 Focus
https://www.youtube.com/watch?v=ERvh0hZ6uP8
WISE Channel
Published on Aug 8, 2014
Many governments are under political and economic pressure to turn their school systems around for higher rankings in the international league tables. Canada, South Korea, Singapore and Finland are commonly used models for the nations that hope to improve teaching and learning in their schools. In search of a silver bullet, reformers now turn their attention to teachers, believing that if only they could attract "the best and the brightest" into the teaching profession the quality of education would improve. This presentation argued that just having better teachers in schools will not automatically improve students' learning outcomes. Lessons from Finland and other high-performing school systems suggest that we should also protect schools from prescribed teaching, toxic accountability, and unhealthy competition, so that all teachers can use their professional knowledge and skills in the best interests of their pupils.
13:11
Top 10 Reasons FINLAND Has the World’s Best SCHOOL SYSTEM
https://www.youtube.com/watch?v=zmG4smezeME
TopTenz
Published on May 3, 2017
While it is almost impossible to say a single nation’s schools are the best in the world, one country that consistently performs extremely well on the Program for International Student Assessment (PISA) exams for math, reading and science, may come as a surprise to many. Finland, a tiny nation of 5.5 million people, consistently makes the top 5 performers across those categories, making it the top educational performer in Europe and one of the strongest in the world. (Singapore, Japan, and South Korea are also strong performers, and China did not submit consolidated results for the most recent test.) Finland?! What?!
10:21
Bill Gates: Teachers need real feedback
https://www.youtube.com/watch?v=81Ub0SMxZQo
TED
Published on May 8, 2013
Until recently, many teachers only got one word of feedback a year: "satisfactory." And with no feedback, no coaching, there's just no way to improve. Bill Gates suggests that even great teachers can get better with smart feedback -- and lays out a program from his foundation to bring it to every classroom.
3:15
Thorndike Laws of Learning
https://www.youtube.com/watch?v=opt05kllJZw
Armavic Dyn Mamigo
Published on Sep 14, 2013
3:27
Edward Thorndike
https://www.youtube.com/watch?v=yEKM51c3lys
Justin Dahlke
Published on Feb 17, 2014
11:00
Learning Theorist Biography: Edward L. Thorndike
https://www.youtube.com/watch?v=rCr0gFY0JlE
Allison Wolfe
Published on Jun 15, 2011
4:18
Chinese Philosophy on Teaching and Learning XueJi
https://www.youtube.com/watch?v=XTosnIBeG_4
e-learning Education
Published on Feb 13, 2017
6:55
Piaget's Theory of Cognitive Development
https://www.youtube.com/watch?v=IhcgYgx7aAA
Sprouts
Published on Aug 1, 2018
Piaget's theory argues that we have to conquer 4 stages of cognitive development:
(timing varies by individual)
1. Sensori-Motor Stage (about age 0-2)
2. Pre-Operational Stage (about age 2-7)
3. Concrete Operational Stage (about age 7-11)
4. Formal Operational Stage (about age 12+)
Only once we have gone through all the stages, at ages that vary by individual, are we able to reach full human intelligence.
7:09
Chinese Philosophy on Education
https://www.youtube.com/watch?v=4tgWXcyQJFA
e-learning Education
Published on Feb 13, 2017
9:42
Be humble -- and other lessons from the philosophy of water | Raymond Tang
https://www.youtube.com/watch?v=OIlSXRC-B-I
TED
Published on Mar 20, 2018
How do we find fulfillment in a world that's constantly changing? Raymond Tang struggled with this question until he came across the ancient Chinese philosophy of the Tao Te Ching. In it, he found a passage comparing goodness to water, an idea he's now applying to his everyday life. In this charming talk, he shares three lessons he's learned so far from the "philosophy of water." "What would water do?" Tang asks. "This simple and powerful question ... has changed my life for the better."
https://terebess.hu/english/tao/gia.html#Kap08
https://www.gutenberg.org/files/49965/49965-h/49965-h.htm
http://www.gutenberg.org/ebooks/7337
22:39
Chinese Philosophy An introduction to an introduction
https://www.youtube.com/watch?v=0N4bnv17Scg
Högskolan Dalarna
Published on Apr 18, 2011
1:26:58
Tao Te Ching for Everyday Life Tao Te Ching Philosophy Explained
https://www.youtube.com/watch?v=VKMyn_qjFDQ
Success Through Wisdom
Published on Aug 25, 2018
Tao Te Ching for Everyday Life Tao Te Ching Philosophy Explained
Tao Everyday Life Explained. Learn the Tao te ching (or Dao de ching) for beginners, in this simple audiobook.
Jordan Peterson On The Illuminati
([ paying attention is like watching for what you don't know ])
https://youtu.be/XnIFlD5Zvs8?t=99
Clash of Ideas
Published on Oct 14, 2017
... ... ...
01:39 god Horus was the eye,
01:40 everyone knows the Eye of Horus, that
01:42 that image is so compelling that we
01:45 still know about, everybody has seen the
01:47 Eye of Horus with a really open pupil,
01:49 and what the Egyptians learned was that
01:51 the open eye was what revivified the
01:54 Dead Society, it's so smart, so what do
01:57 you do if your life isn't in order,
01:58 bloody well pay attention and that isn't
02:01 the same as thinking, it's a different
02:03 process paying attention, thinking is
02:05 like the imposition of structure in some
02:07 sense, I know I'm oversimplifying, but
02:10 paying attention is something like
02:11 watching for what you don't know,
02:13 and so like one of the things I often
02:15 recommend to my clinical clients if
02:16 they're having trouble with a family
02:18 member is number one, shut up, don't tell
02:21 them anything about yourself,
02:23 just and I don't mean in a rude way, it's
02:25 just like no more personal information,
02:26 number two watch them like a hawk and
02:30 listen, and if you do that long enough
02:32 they will tell you exactly what they're
02:34 up to and they will also tell you who
02:36 they think you are, and then you'll be
02:38 shocked, because they think you're
02:39 something, generally speaking, that's not
02:41 what you are at all, and when
02:43 they tell you it's like a revelation to
02:45 both of you, but attention is an
02:47 unbelievably powerful force, and you see
02:49 this in psychotherapy too because a lot
02:51 of what you do, and in any reparative
02:53 relationship is really pay attention to
02:55 that other person, pay attention and listen
02:57 and you would not believe what people
03:00 will tell you or reveal to you if you
03:02 watch them as if you want to know
03:04 instead of watching them so that you'll
03:06 have your prejudices reinforced, that's
03:10 usually how people interact is like I
03:12 want to keep thinking about you the way
03:14 I'm thinking about you, and so I'm gonna
03:15 filter out anything that disproves my
03:18 theory, that's not what I'm talking about
03:19 at all it's like I'm gonna watch you and
03:21 figure out what you're up to not in a
03:23 rude way, none of that, I just want to see
03:26 what's there and that'll be good for you
03:28 probably and also be good for me and so
... ... ...
Jordan Peterson On The Illuminati
([ Jean Piaget ])
https://youtu.be/XnIFlD5Zvs8?t=744
Clash of Ideas
Published on Oct 14, 2017
... ... ...
12:24 again by reading Jean Piaget, because one
12:26 of the things that Piaget said about
12:27 kids was that they first learned to play
12:29 a game but they don't know what the
12:31 rules are, meaning that if you have a
12:33 bunch of kids together they can play a
12:35 game, but if you take one of the kids out
12:37 of the game, when they're young, say six
12:38 and you say, what the rules are, what are
12:40 the rules, they can only sort of give you
12:42 a representation, so you take
12:44 six-year-old one, and he'll tell you some
12:46 of the rules, and six-year-old two will
12:47 tell you different rules, and you
12:49 know six-year-old three will tell you
12:51 different rules, but if you put them all
12:52 together they can play, so they have the
12:55 knowledge embodied either individually
12:58 or in the group, the knowledge is there
12:59 to be extracted, well then they get a
13:02 little older, they can extract the rules
13:03 and then they start to play by the rules
13:06 and then Piaget's last step was, well
13:09 it isn't just that the kids play by the
13:10 rules, it's that they learn that they
13:11 can make the rules, and he thought about
13:13 that as moral progression, first you can
13:15 play, then you can play by the rules, then
13:17 you learn, maybe, because he didn't think
13:19 everyone learned this
13:20 that you're actually the master of the
13:21 rules, that doesn't mean the rules are
13:23 arbitrary, but it means that you can be
13:26 the generator of the rules, assuming
13:29 that you know how to play the game and
13:30 he thought about that as a moral
13:32 progression, and then I thought well
13:33 that's exactly what happened to Moses
... ... ...
Notes (on #education, #learning, #talent, #mind)
1. Jerome Bruner, The culture of education, 1996
https://drive.google.com/open?id=1iFT0g1m1dBAkkyTBngfPm4j5eacl4US8
2. Steve Casner, Careful, 2017
https://drive.google.com/open?id=1kzYudMcd1Z4S-rR_JXVzOBHt1xvhIYcx
([ p.28: "sublte" [sic; subtle] ])
3. Benedict Carey, How we learn, 2014
https://drive.google.com/open?id=13h_z6poFKPwBokKhYo1qOXMgV1Ol_sCb
4. Daniel Coyle, The little book of talent, 2012
https://drive.google.com/open?id=1hj07vTyQKdUa5j588c0bFcBYhIiXH5jh
5. Elena Bodrova, Deborah J. Leong, Tools of the mind, 1996
https://drive.google.com/open?id=1EUjGUgimeRjJClkHyOYy5dqZkpnzsmBP
The basic principles underlying the Vygotskian framework can be summarized as follows:
1. Children construct knowledge.
2. Development cannot be separated from its social context.
3. Learning can lead development.
4. Language plays a central role in mental development.