Thursday, April 18, 2019

Nicholas Carr, Kai Fu Lee, Neil Postman


Nicholas Carr, 
([ 3 views on Artificial Intelligence ])
Andrew Ng, Kai Fu Lee, Yann LeCun, 
Neil Postman 


56:06
Nicholas Carr with Timothy Wu: The Glass Cage
https://youtu.be/m2JMDk3XoM0?t=16

https://www.youtube.com/watch?v=m2JMDk3XoM0
92Y Plus
Published on Dec 16, 2014

29:18
Andrew Ng - The State of Artificial Intelligence
https://youtu.be/NKpuX_yzdYs?t=23
https://www.youtube.com/watch?v=NKpuX_yzdYs
The Artificial Intelligence Channel
Published on Dec 15, 2017

50:06
Dr Kai-Fu Lee - The Future of Artificial Intelligence
https://www.youtube.com/watch?v=sGmuuppF8lo
The Artificial Intelligence Channel
Published on Sep 5, 2017
This talk took place on August 29, 2017

36:46
Yann LeCun - Power & Limits of Deep Learning
https://youtu.be/0tEhw5t6rhc?t=141
https://www.youtube.com/watch?v=0tEhw5t6rhc
The Artificial Intelligence Channel
Published on Nov 19, 2017


ABC 
 A  AI - artificial intelligence (machine learning,
         data mining, deep learning)
         "deep learning" here in the neural-network, machine-learning sense
           https://en.wikipedia.org/wiki/Deep_learning
         not the educational meaning of "deeper learning"
           https://en.wikipedia.org/wiki/Deeper_learning
 B  Big data (Data science)  
         https://en.wikipedia.org/wiki/Big_data 
         https://en.wikipedia.org/wiki/Data_science 
 C  Cloud computing - Amazon Web Services (AWS),
                      Microsoft Azure,
                      IBM Cloud,
                      Google Cloud Platform (GCP)

([
   6:48
   Theory of Constraints (TOC) 3 Bottle Oiled Wheels Demonstration
   https://youtu.be/mWh0cSsNmGY?t=16
   https://www.youtube.com/watch?v=mWh0cSsNmGY
   Arrie van Niekerk
   Published on Jul 5, 2012
    ])


    Technology
    Computer Sciences
October 16, 2015
System that replaces human intuition with algorithms outperforms human teams
by Larry Hardesty, Massachusetts Institute of Technology
http://phys.org/news/2015-10-human-intuition-algorithms-outperforms-teams.html
http://groups.csail.mit.edu/EVO-DesignOpt/groupWebSite/uploads/Site/DSAA_DSM_2015.pdf

Big-data analysis consists of searching for buried patterns that have some kind of predictive power. But choosing which "features" of the data to analyze usually requires some human intuition.

"What we observed from our experience solving a number of data science problems for industry is that one of the very critical steps is called feature engineering," Veeramachaneni says. "The first thing you have to do is identify what variables to extract from the database or compose, and for that, you have to come up with a lot of ideas."
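The feature-engineering step Veeramachaneni describes (deciding which variables to extract or compose from a database) can be sketched with a toy example. Everything below is invented for illustration: a tiny purchase log and a few hand-picked per-user aggregates that a downstream model could use as predictors.

```python
from collections import defaultdict

# Hypothetical raw event log: (user_id, amount, weekday), weekday 0 = Monday.
events = [
    (1, 10.0, 0), (1, 25.0, 5), (1, 5.0, 6),
    (2, 100.0, 2), (2, 80.0, 2),
]

def engineer_features(rows):
    """Compose per-user candidate variables (aggregates) for a model."""
    by_user = defaultdict(list)
    for user, amount, weekday in rows:
        by_user[user].append((amount, weekday))
    features = {}
    for user, items in by_user.items():
        amounts = [a for a, _ in items]
        features[user] = {
            "n_purchases": len(items),
            "total_spent": sum(amounts),
            "mean_spent": sum(amounts) / len(amounts),
            # share of purchases made on Saturday/Sunday
            "weekend_share": sum(1 for _, d in items if d >= 5) / len(items),
        }
    return features

features = engineer_features(events)
print(features[1])
```

The hard part in practice is not computing the aggregates but, as the quote says, coming up with which ones are worth computing.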


([
key ideas

   data science/ extraction of knowledge from data //  
   the order in which we learn things does matter //
   learn simple concept before learning more complex/abstract ones //  
   covariance/ structural homology/
   homology/ the existence of shared ancestry between a pair of structures //
   ontology/ where to put things/ ways to organize things //
   The catastrophes are the surprises: all else is mere repetition //

   morphology -
             1. the branch of biology that deals with the form and
                 structure of animals and plants
             3. any scientific study of form and structure,
                 as in physical geography
             4. form and structure, as of an organism, regarded as a whole //

     • "Data Science", which is the automatic (or semi-automatic) extraction of knowledge from data. ── Yann LeCun (self.MachineLearning)
     • the goal of extracting information from data. ── Jure Leskovec, Anand Rajaraman, Jeffrey D. Ullman
     • ... and thus discover something about data that will be seen in the future. ── Jure Leskovec, Anand Rajaraman, Jeffrey D. Ullman
     • All algorithms for analysis of data are designed to produce a useful summary of the data, from which decisions are made. ── Jure Leskovec, Anand Rajaraman, Jeffrey D. Ullman, Mining of Massive Datasets, 2010; http://infolab.stanford.edu/~ullman/mmds/book.pdf

   data science/ DS/ machine learning/ ML/ unsupervised feature learning/
   unsupervised learning/ computer science/ CS/ science fiction/ SF/ sci-fi/
   fantasy/ fa/ fiction/ fi/ Finland/ fi/ reinforcement learning/ RL/   
   deep learning/ DL/ artificial intelligence/ AI/ expert systems/
   representation learning/ RL/ 

   natural language processing / NLP/ 
   Natural Language Speech Recognition/ NLSR/ 
   Speaker-Independent Natural Language Speech Recognition/ SI-NLSR or NLSR/

Gary Klein, Sources of Power: How People Make Decisions, 1998    [ ]

p.276
One definition of uncertainty (paraphrasing Lipshitz and Shaul 1997) is “doubt that threatens to block action”. Key pieces of information are missing, unreliable, ambiguous, inconsistent, or too complex to interpret, and as a result a decision maker will be reluctant to act. In many cases, the action will be delayed or will be overtaken by events as windows of opportunity close.

p.277
   Schmitt and Klein (1996) identified four sources of uncertainty:

   1. Missing information. Information is unavailable. It has not been received or has been received but cannot be located when needed.

   2. Unreliable information. The credibility of the source is low, or is perceived to be low even if the information is highly accurate.

   3. Ambiguous or conflicting information. There is more than one reasonable way to interpret the information.

   4. Complex information. It is difficult to integrate the different facets of the data.

   We can also identify several different levels of uncertainty: the level of data; the level of knowledge, in which inferences are drawn about the data; and the level of understanding, in which the inferences are synthesized into projections of the future, into diagnoses and explanations of events.

    (Klein, Gary, Sources of power : how people make decisions / Gary Klein., 1. decision-making., 1998, 2001, 685.403, MIT Press, )
    ])


19:47
Automation: Last Week Tonight with John Oliver (HBO)
https://www.youtube.com/watch?v=_h1ooyyFkF0
LastWeekTonight
Published on Mar 3, 2019

([
    „Machine learning is a mathematical technique for training computer systems to make accurate predictions from a large corpus of training data, with a degree of accuracy that in some domains can mimic human cognition.“
    —— Maciej Ceglowski,
       May 7, 2019,
       US Senate Committee on Banking, Housing, and Urban Affairs
       on Privacy Rights and Data Collection in a Digital Economy


<< long read - scroll down to skip this section >>
Maciej Ceglowski's Senate testimony on Privacy Rights and Data Collection in a Digital Economy
 

 May 7, 2019,
Senate Committee on Banking, Housing, and Urban Affairs
Privacy Rights and Data Collection in a Digital Economy (Senate hearing)

https://idlewords.com/talks/senate_testimony.2019.5.htm
 

Consent in a world of inference

For example, imagine that an algorithm could inspect your online purchasing history and, with high confidence, infer that you suffer from an anxiety disorder. Ordinarily, this kind of sensitive medical information would be protected by HIPAA, but is the inference similarly protected? What if the algorithm is only reasonably certain? What if the algorithm knows that you’re healthy now, but will suffer from such a disorder in the future?

The question is not hypothetical—a 2017 study showed that a machine learning algorithm examining photos posted to the image-sharing site Instagram was able to detect signs of depression before it was diagnosed in the subjects, and outperformed medical doctors on the task.

Addendum: Machine Learning and Privacy

Machine learning is a mathematical technique for training computer systems to make accurate predictions from a large corpus of training data, with a degree of accuracy that in some domains can mimic human cognition.

For example, machine learning algorithms trained on a sufficiently large data set can learn to identify objects in photographs with a high degree of accuracy, transcribe spoken language to text, translate texts between languages, or flag anomalous behavior on a surveillance videotape.
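As a toy sketch of that train-then-predict pattern, here is a nearest-centroid classifier over invented 2-D feature vectors. The data, labels, and method are made up for illustration and are vastly simpler than the deep networks the testimony describes, but the shape is the same: fit parameters from labeled examples, then predict on new inputs.

```python
# Toy "learn from labeled data, then predict" sketch:
# a nearest-centroid classifier with two labels.

def train(samples):
    """samples: list of ((x, y), label). Returns one centroid per label."""
    sums, counts = {}, {}
    for (x, y), label in samples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Assign the label of the nearest centroid (squared distance)."""
    px, py = point
    return min(centroids, key=lambda label:
               (centroids[label][0] - px) ** 2 +
               (centroids[label][1] - py) ** 2)

training_data = [((0.1, 0.2), "cat"), ((0.2, 0.1), "cat"),
                 ((0.9, 0.8), "dog"), ((0.8, 0.9), "dog")]
model = train(training_data)
print(predict(model, (0.15, 0.15)))
```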

The mathematical techniques underpinning machine learning, like convolutional neural networks (CNN), have been well-known since before the revolution in machine learning that took place beginning in 2012. What enabled the key breakthrough in machine learning was the arrival of truly large collections of data, along with concomitant [i.e., accompanying or collaterally connected] computing power, allowing these techniques to finally demonstrate their full potential.

It takes data sets of millions or billions of items, along with considerable computing power, to get adequate results from a machine learning algorithm. Before the advent of the surveillance economy, we simply did not realize the power of these techniques when applied at scale.

Because machine learning has a voracious appetite for data and computing power, it contributes both to the centralizing tendency that has consolidated the tech industry, and to the pressure companies face to maximize the collection of user data.

Machine learning models pose some unique problems in privacy regulation because of the way they can obscure the links between the data used to train them and their ultimate behavior.

A key feature of machine learning is that it occurs in separable phases. An initial training phase consists of running a learning algorithm on a large collection of labeled data (a time and computation-intensive process). This model can then be deployed in an exploitation phase, which requires far fewer resources.

Once the training phase is complete, the data used to train the model is no longer required and can conceivably be thrown away.
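The separable phases, and the fact that the training data can be thrown away once the parameters are learned, can be sketched with a deliberately trivial model (a single learned threshold; all numbers here are invented):

```python
import pickle

# Phase 1 (training): expensive, needs the full labeled data set.
training_data = [(1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1)]  # (value, label)
threshold = sum(x for x, _ in training_data) / len(training_data)
model = {"threshold": threshold}  # the trained "model" is just parameters

blob = pickle.dumps(model)  # the model can be shipped across space and time...
del training_data           # ...while the data that produced it is discarded.

# Phase 2 (exploitation): cheap, and the blob reveals only numeric weights,
# nothing about the individual records that went into training.
deployed = pickle.loads(blob)

def classify(x):
    return 1 if x >= deployed["threshold"] else 0

print(classify(7.5))
```

The serialized blob is exactly the kind of artifact whose legal status under data-transfer rules the testimony calls unclear.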

The two phases of training and exploitation can occur far away from each other both in space and time. The legal status of models trained on personal data under privacy laws like the GDPR, or whether data transfer laws apply to moving a trained model across jurisdictions, is not clear.

Inspecting a trained model reveals nothing about the data that went into it. To a human inspecting it, the model consists of millions and millions of numeric weights that have no obvious meaning, or relationship to human categories of thought. One cannot examine an image recognition model, for example, and point to the numbers that encode ‘apple’.

The training process behaves as a kind of one-way function. It is not possible to run a trained model backwards to reconstruct the input data; nor is it possible to “untrain” a model so that it will forget a specific part of its input.

Machine learning algorithms are best understood as inference engines. They find structure and excel at making inferences from data that can sometimes be surprising even to people familiar with the technology. This ability to see patterns that humans don’t notice has led to interest in using machine learning algorithms in medical diagnosis, evaluating insurance risk, assigning credit scores, stock trading, and other fields that currently rely on expert human analysis.

The opacity of machine learning models, combined with this capacity for inference, also make them an ideal technology for circumventing legal protections on data use. In this spirit, I have previously referred to machine learning as “money laundering for bias”. Whatever latent biases are in the training data, whether or not they are apparent to humans, and whether or not attempts are made to remove them from the data set, will be reflected in the behavior of the model.

A final feature of machine learning is that it is curiously vulnerable to adversarial inputs. For example, an image classifier that correctly identifies a picture of a horse might reclassify the same image as an apple, sailboat or any other object of an attacker’s choosing if they can manipulate even one pixel in the image. Changes in input data not noticeable to a human observer will be sufficient to persuade the model. Recent research suggests that this property is an inherent and ineradicable feature of any machine learning system that uses current approaches.
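The adversarial-input property can be illustrated on a tiny hand-built linear classifier. Everything here is invented; real attacks (e.g., the fast gradient sign method) apply the same idea, stepping each input feature against the gradient, to deep networks.

```python
# A fixed linear classifier with hand-picked weights (not a trained model).
weights = [0.5, -0.3, 0.8]
bias = 0.0

def score(x):
    return sum(w * xi for w, xi in zip(weights, x)) + bias

def classify(x):
    return "horse" if score(x) >= 0 else "apple"

x = [0.2, 0.4, 0.1]  # classified as "horse"

# FGSM-style attack: nudge each feature a small step in whichever
# direction most decreases the score.
eps = 0.2
x_adv = [xi - eps * (1 if w > 0 else -1) for xi, w in zip(x, weights)]

print(classify(x), "->", classify(x_adv))  # small changes flip the label
```

Each feature moves by at most 0.2, yet the predicted label changes; in high-dimensional image models the per-pixel change needed is far smaller still.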

In brief, machine learning is effective, has an enormous appetite for data, requires large computational resources, makes decisions that resist analysis, excels at finding latent structure in data, obscures the link between source data and outcomes, defies many human intuitions, and is readily fooled by a knowledgeable adversary.

—Maciej Ceglowski, 2019

source:
https://tildes.net/~tech

    ])


([
   1:27:10
Complete Machine Learning Course for Beginners || Machine Learning Tutorial for Beginners
https://youtu.be/J1_A-rdNBNQ
https://www.youtube.com/watch?v=J1_A-rdNBNQ
Geek's Lesson
Published on Jun 13, 2018
    ])


  ▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀

1:32:12
Eric Schmidt in conversation with James Cameron
https://youtu.be/dBse9xISbXQ?t=1119
https://youtu.be/dBse9xISbXQ?t=855
https://www.youtube.com/watch?v=dBse9xISbXQ
Google
Published on Oct 18, 2011



Saturday interview
 Joseph Stiglitz on artificial intelligence: 'We’re going towards a more divided society'
    The technology could vastly improve lives, the economist says – but only if the tech titans that control it are properly regulated. ‘What we have now is totally inadequate’
 by Ian Sample Science editor
   Sat 8 Sep 2018 02.00 EDT
https://www.theguardian.com/technology/2018/sep/08/joseph-stiglitz-on-artificial-intelligence-were-going-towards-a-more-divided-society
Joseph Stiglitz 
    Stiglitz poses a question that he suspects tech firms have faced internally. “Which is the easier way to make a buck: figuring out a better way to exploit somebody, or making a better product? With the new AI, it looks like the answer is finding a better way to exploit somebody.” 


The 2018 AAAI Spring Symposium Series
The Potential Social Impact of the Artificial Intelligence Divide
Andrew B. Williams
Humanoid Engineering & Intelligent Robotics Lab
University of Kansas
andrew.williams@ku.edu
https://aaai.org/ocs/index.php/SSS/SSS18/paper/download/17539/15382
A person’s dignity is closely tied to their ability to work and contribute to society and their family’s well-being. 


Kelly, Kevin, 1952—
What technology wants / Kevin Kelly,
1. technology—social aspects.
2. technology and civilization.

T14.5.K45 2010
303.48'3—dc22

copyright © 2010

https://drive.google.com/open?id=1dDEBRwp3XyIKgu_bUyp18nf_9OOmpyyJ
pp.11—12
    I dislike inventing words that no one else uses, but in this case all known alternatives fail to convey the required scope.  So I've somewhat reluctantly coined a word to designate the greater, global, massively interconnected system of technology vibrating around us.  I call it the technium.  The technium extends beyond shiny hardware to include culture, art, social institutions, and intellectual creations of all types.  It includes intangibles like software, law, and philosophical concepts.  And most important, it includes the generative impulses of our inventions to encourage more tool making, more technology invention, and more self-enhancing connections.  For the rest of this book I will use the term technium where others might use technology as a plural, and to mean a whole system (as in "technology accelerates").  I reserve the term technology to mean a specific technology, such as radar or plastic polymers.  For example, I would say: "The technium accelerates the invention of technologies."  In other words, technologies can be patented, while the technium includes the patent system itself. 

([
    “ Once a development path is set on a particular course, then network externalities, the learning process of organizations, and the historically derived modelling of the issues reinforce the course.”
         ― Douglas North
     • Uncertainty and path dependence
        • https://www.youtube.com/watch?v=KKfkQW7_-Pg

1:42:42
The Predictioneer's Game
42:47 (start)
https://youtu.be/XfE0ih-6fi8?t=2567
44:00 (stop) 


0:16    (start of talk)
https://youtu.be/XfE0ih-6fi8?t=16
1:07:10 (end of talk)
https://youtu.be/XfE0ih-6fi8?t=4030 


https://www.youtube.com/watch?v=XfE0ih-6fi8
NYUAD Institute
Published on Sep 15, 2015
The Predictioneer's Game
December 9, 2009
Bruce Bueno de Mesquita will discuss how applied game theory can be used to anticipate policy choices whether in business or in government.


The Predictioneer's Game
https://slideplayer.com/slide/4437069/ 


    ])

  ▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀

 p.194
Often we will invent a machine for a particular and limited purpose, and then, in what Neil Postman calls the Frankenstein syndrome, the invention's own agenda blossoms.  "Once the machine is built," Postman writes, "we discover, always to our surprise——that it has ideas of its own; that it is quite capable not only of changing our habits but ... of changing our habits of mind."  In this way, humans have become an adjunct to or, in Karl Marx's phrase, appendages of the machine.

p.196
In 1997, I interviewed [George] Lucas ... .  ...  [...]  ...  I asked him, "Do you think technology is making the world better or worse?"  Lucas's answer:

     If you watch the curve of science and everything we know, it
     shoots up like a rocket.  We're on this rocket and we're going
     perfectly vertical into the stars.  But the emotional intelligence
     of humankind is equally if not more important than
     our intellectual intelligence.  We're just as emotionally illiterate
     as we were 5,000 years ago; so emotionally our line is
     completely horizontal.  The problem is the horizontal and 
     the vertical are getting farther and farther apart.  And as
     these things grow apart, there's going to be some kind of
     consequence of that.

I think we underestimate the strain of that gap.

    (Kelly, Kevin, 1952—, T14.5.K45 2010, 303.48'3—dc22, copyright © 2010)
(What technology wants / Kevin Kelly, 1. technology—social aspects., 2. technology and civilization., )


([
https://en.wikipedia.org/wiki/Agrarian_society
https://en.wikipedia.org/wiki/Industrial_Revolution
https://en.wikipedia.org/wiki/The_Second_Machine_Age

https://en.wikipedia.org/wiki/Agrarian_society
   12:05
   Mesopotamia: Crash Course World History #3
   https://www.youtube.com/watch?v=sohXPx_XZ6Y
   CrashCourse
   Published on Feb 9, 2012

https://en.wikipedia.org/wiki/Industrial_Revolution
   11:04
   Coal, Steam, and The Industrial Revolution: Crash Course World History #32
   https://www.youtube.com/watch?v=zhL5DCizj5c
   CrashCourse
   Published on Aug 30, 2012


   12:31
   The Industrial Economy: Crash Course US History #23
   https://www.youtube.com/watch?v=r6tRp-zRUJs
   CrashCourse
   Published on Jul 25, 2013 


General Purpose Technologies "Engines of Growth?"
Timothy F. Bresnahan, Manuel Trajtenberg
NBER Working Paper No. 4148 (Also Reprint No. r2008)
Issued in August 1992
https://www.nber.org/papers/w4148
https://www.nber.org/papers/w4148.pdf

Friedman, Thomas L.
The world is flat : a brief history of the 21st century / Thomas L. Friedman -- 1st rev. and expanded ed.
1. diffusion of innovations
2. information society
3. globalization--economic aspects
4. globalization--social aspects
2005, 2006
303.4833
https://drive.google.com/open?id=15SPosAZbzOeMG2INpHKm6-iXCrBAguSp

pp.204-205, p.207, p.208, p.208
  pp.204-205
   As Stanford University economist Paul Romer pointed out, economists have known for a long time that “there are goods that are complementary -- whereby good A is a lot more valuable if you also have good B. It was good to have paper and then it was good to have pencils, and soon as you got more of one you got more of the other, and as you got a better quality of one and better quality of the other, your productivity improved. This is known as the simultaneous improvement of complementary goods.”

  p.207
1989 essay, Computer and Dynamo: the modern productivity paradox in a not-too distant mirror, economic historian Paul A. David
   In a pathbreaking 1989 essay, “Computer and Dynamo: the modern productivity paradox in a not-too-distant mirror,” the economic historian Paul A. David explained such a lag by pointing to a historical precedent. He noted that while the lightbulb was invented in 1879, it took several decades for electrification to kick in and have a big economic and productivity impact. Why? Because it was not enough just to install electric motors and scrap the old technology -- steam engines. The whole way of doing manufacturing had to be reconfigured. In the case of electricity, David pointed out, the key breakthrough was in how buildings, and assembly lines, were redesigned and managed. Factories in the steam age tended to be heavy, costly multistory buildings designed to brace the weighty belts and other big transmission devices needed to drive steam-powered systems. Once small, powerful electric motors were introduced, everyone hoped for a quick productivity boost. It took time, though. To get all the savings, you needed to redesign enough buildings. You needed to have long, low, cheaper-to-build single-story factories, with small electric motors powering machines of all sizes. Only when there was a critical mass of experienced factory architects and electrical engineers and managers, who understood the complementarities among the electric motor, the redesign of the factory, and the redesign of the production line, did electrification really deliver the productivity breakthrough in manufacturing, David wrote.

  p.208
Many of the 10 flatteners have been around for years. But for the full flattening effects to be felt, we needed not only the 10 flatteners to converge but also something else. We needed the emergence of a large cadre of managers, innovators, business consultants, business schools, designers, IT specialists, CEOs, and workers to get comfortable with, and develop, the sorts of horizontal collaboration and value-creation processes and habits that could take advantage of this new, flatter playing field. In short, the convergence of the 10 flatteners begat the convergence of a set of business practices and skills that would get the most out of the flat world. And then the two began to mutually reinforce each other.

  p.208
Stanford University economist Paul Romer
   “When people asked, ‘Why didn't the IT [Information Technology - computers, embedded computing devices, mass storage, software, networking, the Apple iPhone, and other like technology having a programmable processing unit] revolution lead to more productivity right away?’ it was because you needed more than just new computers,” said Romer. “You needed new business processes and new types of skills to go with them. The new way of doing things makes the information technologies more valuable, and the new and better information technologies make the new ways of doing things more possible.”

     (Friedman, Thomas L., The world is flat : a brief history of the 21st century / Thomas L. Friedman -- 1st rev. and expanded ed., 1. diffusion of innovations, 2. information society, 3. globalization--economic aspects, 4. globalization--social aspects, 2005, 2006, 303.4833, pp.204-205, p.207, p.208, p.208)

    ]) 

  ▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀
 
Nicholas Carr's blog 
On autopilot: the dangers of over automation

The grounding of Boeing’s popular new 737 Max 8 planes, after two recent crashes, has placed a new focus on flight automation. Here’s an excerpt from my [Nicholas Carr] 2014 book on automation and its human consequences, The Glass Cage, that seems relevant to the discussion.

http://www.roughtype.com/?p=8622

Rory Kay, a long-time United Airlines captain who until recently served as the top safety official with the Air Line Pilots Association, fears the aviation industry is suffering from “automation addiction.” In a 2011 interview, he put the problem in stark terms: “We’re forgetting how to fly.” 

  ▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀

28:35
Neil Postman Are We Amusing Ourselves to Death Part I, Dec. 1985
https://www.youtube.com/watch?v=FRabb6_Gr2Y
ashikmlakonja
Published on Dec 18, 2011

28:54
Neil Postman Are We Amusing Ourselves to Death Part II, Jan. 1986
https://www.youtube.com/watch?v=zHd31L6XPEQ
ashikmlakonja
Published on Dec 19, 2011

1:25:12
College Lecture Series - Neil Postman - "The Surrender of Culture to Technology"
https://youtu.be/hlrv7DIHllE?t=173
https://www.youtube.com/watch?v=hlrv7DIHllE
College of DuPage
Published on Jun 3, 2013
A lecture delivered by Neil Postman on Mar. 11, 1997 in the Arts Center. Based on the author's book of the same title. Neil Postman notes the dependence of Americans on technological advances for their own security. Americans have come to expect technological innovations to solve the larger problems of mankind. Technology itself has become a national "religion" which people take on faith as the solution to their problems.

7 questions
 1. what is the problem to which this technology is a solution?
 2. whose problem is it?
 3. suppose we solve this problem, and solve it decisively, what new problems might be created because we have solved the problem?
 4. which people and what institutions might be most seriously harmed by a technological solution?
 5. what changes in language are being enforced by new technologies?
    what is being gained and what is being lost by such changes?
 6. what sort of people and institutions acquire special economic and political power because of technological change?
    this question needs to be asked, because the transformation of a technology into a medium always results in a realignment of economic and political power.
 7. what alternative uses might be made of a technology? one proceeds here by assuming that any medium we have created is not necessarily the only one we might make of a particular technology

 https://youtu.be/hlrv7DIHllE?t=1035
 1. what is the problem to which this technology is a solution?
    now this question needs to be asked, because there are technologies that are not solutions to any problem that a normal person would regard as significant

 https://youtu.be/hlrv7DIHllE?t=1440
 2. whose problem is it? 

    but this question, whose problem is it, needs to be applied to any technologies. most technologies do solve some problem, but the problem may not be everybody's problem  or even most people's problem.  we need to be very careful in determining who will benefit from a technology, and who will pay for it.  they are not always the same people.  

 https://youtu.be/hlrv7DIHllE?t=1521
 3. suppose we solve this problem, and solve it decisively, what new problems might be created because we have solved the problem?
    the automobile solves some very important problems for most people

 https://youtu.be/hlrv7DIHllE?t=1740
 4. which people and what institutions might be most seriously harmed by a technological solution?

 https://youtu.be/hlrv7DIHllE?t=2259
 5. what changes in language are being enforced by new technologies?
    what is being gained and what is being lost by such changes?

 https://youtu.be/hlrv7DIHllE?t=2746
 6. what sort of people and institutions acquire special economic and political power because of technological change?
    this question needs to be asked, because the transformation of a technology into a medium always results in a realignment of economic and political power.

 https://youtu.be/hlrv7DIHllE?t=2925
 7. what alternative uses might be made of a technology? one proceeds here by assuming that any medium we have created is not necessarily the only one we might make of a particular technology

 https://youtu.be/hlrv7DIHllE?t=3037
 1. what is the problem to which a technology claims to be the solution
 2. whose problem is it
 3. what new problems will be created because of solving an old one
 4. which people and institutions will be most harmed
 5. what changes in language are being promoted
 6. what shifts in economic and political power are likely to result
 7. what alternative media might be made from a technology 



automobile, television, computer
the same blindness, no one is asking anything worth asking         

 https://youtu.be/hlrv7DIHllE?t=3629
  60:29   Tocqueville says in Democracy in America 


([
the video above has really low volume; if you want to boost your laptop or PC speaker loudness without using headphones, check out the following video

TIL how to make video louder in windows 10

https://www.google.com/search?&q=how+to+make+video+louder+in+windows+10

this was the result
How to Increase the Maximum Volume in Windows 10 - YouTube
https://www.youtube.com/watch?v=R1sv5bsC6cg

I access the same feature this way:
   right click on the 'Speaker icon' on lower right hand corner of the screen located on the taskbar
   a pop-up menu should show up
   select 'Sounds' 
   a window should pop-up, named Sound
   click the 'Playback' tab ([1st tab reading from left to right]) 
   select 'Speakers'
   click on the 'Properties' button ([lower right corner of window])
   a window should pop-up, named Speakers Properties
   click on 'Enhancements' tab
   check box 'Loudness Equalization'
   click on the 'Apply' button ([bottom right of window])
   OK
   OK
     ])
