Media Platforms: Fundamentals and Harm

What is a media platform? Where do media platforms come from? And why should we think critically about them?

Digital media platforms originated in the mass media culture of the 20th century (such as television), but also in the engineering of computational devices (such as surveillance devices) (Carah). Media platforms are networks of relations between machines, humans, and the environment, built on data, code, algorithms, human activity, social relationships, and more (Carah).

Media platforms are data-driven, meaning that their purpose is to collect and process information in order to act upon it (Carah). The experiences users have and the ways they engage on media platforms are therefore organised by the algorithms inherent to these platforms (Carah).

We should think critically about media platforms because they are linked to our environments and infrastructures of being, and they shape the habits and materials through which we express ourselves (Carah). Platforms are so intertwined with our lives that it is important to recognise what influences these systems that collect, process, and use our information (Alaimo & Kallinikos; Carah).

For example, platforms such as YouTube use recommendation algorithms to increase connectivity and keep people engaged on the platform (see Fig. 1) (Cooper).

Figure 1. Sritan Motati. "Simplified diagram of the recommender system used by YouTube." How YouTube Knows What You Want to Watch, 14 Mar. 2021, https://medium.com/techtalkers/how-youtube-knows-what-you-want-to-watch-212a24d79f49

As seen in the above diagram, the algorithm recommends videos to users based on what videos other users have engaged with to attempt to keep a user’s attention on the platform for as long as possible (Cooper). This aligns with the argument that media platforms are economically, culturally, and politically powerful (Carah).

Therefore, it is important to think critically about media platforms to understand how they use our habits and engagement to fulfil a goal. YouTube, for example, aims to keep users engaged with content so that they stay on the platform, ultimately making a profit by showing them ads (Cooper).
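As a rough sketch of this engagement-driven logic, the snippet below ranks unseen videos for a user by how much other users' watch histories overlap with theirs. The data, scoring rule, and function are invented simplifications, not YouTube's actual recommender.

```python
# A minimal, hypothetical sketch of engagement-based recommendation:
# videos are ranked for a user by how strongly they co-occur in the
# watch histories of other users (simple co-occurrence counting).
from collections import Counter

# Hypothetical watch histories (user -> set of video ids).
watch_history = {
    "user_a": {"v1", "v2", "v3"},
    "user_b": {"v2", "v3", "v4"},
    "user_c": {"v1", "v4", "v5"},
}

def recommend(target_user, history, top_n=3):
    seen = history[target_user]
    scores = Counter()
    for other, videos in history.items():
        if other == target_user:
            continue
        overlap = len(seen & videos)          # shared engagement signal
        for video in videos - seen:           # only recommend unseen videos
            scores[video] += overlap
    return [video for video, _ in scores.most_common(top_n)]

print(recommend("user_a", watch_history))     # e.g. ['v4', 'v5']
```

In a real system these engagement signals would be combined with many other features, but the underlying incentive the sketch illustrates, keeping attention on the platform, is the same one Cooper describes.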

Hicks provides another example of why it is important to think critically about platforms by exploring the history of online dating. Hicks explains how early matchmaking algorithms were built on the patriarchal idea that women were restricted in society and needed to marry a professional man to gain status. However, as hook-up culture developed and societal expectations continued to shift, it is important to consider how these platforms shift too, since they are shaped by the people who make them and their biases (Hicks).

What are simulation and optimisation in media platforms?

Simulation refers to images, words, or sequences produced by code based upon data and information that has been collected and processed (Carah). Simulation technologies are predictive, observing what people have done in the past or are currently doing to build simulations that set the coordinates of the task the program will complete (Carah). For example, thispersondoesnotexist.com showcases images of people that look 'real' but do not represent a 'real body' in the world (see Fig. 2) (Carah).

Figure 2. Michael Reilly & Keon Parsa. "A Collage of AI-generated faces offered for sale." Can You Tell Who Is Real or Fake Just from a Picture?, Psychology Today, 21 Jan. 2021, https://www.psychologytoday.com/au/blog/dissecting-plastic-surgery/202101/can-you-tell-who-is-real-or-fake-just-picture

Instead, an image is produced by an automated data-driven model from the data it has received, and a second algorithm can recognise a person in the simulated image. This is why, as seen in Fig. 2, errors can also occur within the images created (Carah). However, these algorithms can learn, and the more they learn and perform their requested task, the more accurate they become (Carah).
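As a loose illustration of how a data-driven model can produce outputs that look plausible without corresponding to any real observation, the sketch below fits a simple statistical model to some invented measurements and then samples new, synthetic values from it. It is only a toy stand-in for the far more complex generative networks behind sites like thispersondoesnotexist.com.

```python
# A toy sketch of 'simulation': learn statistics from past data, then
# generate synthetic samples that resemble the data without copying any
# real observation. (The measurements below are invented.)
import random

observed_heights_cm = [158.0, 162.5, 171.0, 168.2, 175.4, 160.9, 180.1]

# 'Training': estimate the mean and spread of the observed data.
mean = sum(observed_heights_cm) / len(observed_heights_cm)
variance = sum((x - mean) ** 2 for x in observed_heights_cm) / len(observed_heights_cm)
std_dev = variance ** 0.5

# 'Simulation': sample new values from the learned distribution.
synthetic_heights = [random.gauss(mean, std_dev) for _ in range(5)]
print(synthetic_heights)  # plausible-looking values, none of which is a real measurement
```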

CGI Instagram influencers are another example of simulation (see Fig. 3) (Carah).

Figure 3. Nikki Gilliland. "CGI Virtual Influencer Shudu Posting for Fenty Beauty on Instagram." Are Virtual Stars the Next Step for Influencer Marketing?, 13 Feb. 2018, https://econsultancy.com/are-virtual-stars-the-next-step-for-influencer-marketing/

These simulated influencers are considered 'easier' to work with than people, as they do whatever they are simulated to do, as seen in Fig. 3 (Carah). They can contort into any pose, wear any clothes, and appear in any location, giving their producers full control over them (Carah; GRIN). One example is Shudu, a digital supermodel who has collaborated with companies such as Fenty Beauty, Vogue, Harper's Bazaar, and The New Yorker to promote their merchandise (GRIN).

Optimisation then refers to creating the most accurate and efficient algorithm possible (Carah). The process involves decision-making at multiple layers including data collection, data cleaning and coding, data processing, prediction and decision-making, and application (Carah). The below image showcases the schematics of an Amazon Echo (see Fig. 4) (Crawford & Joler).

Figure 4. Kate Crawford & Vladan Joler. “Amazon Echo Dot (schematics).” Anatomy of an AI System, 7 Sep. 2018, https://anatomyof.ai/

Even in the most common interaction (a command and a response), many processes are at work, including optimisation, as the components in the schematic cooperate to complete a task, as seen in the above image (Crawford & Joler). Crawford and Joler state that the bottom of the map in Fig. 5 shows the history of human capacity and knowledge, which Amazon uses to train the Echo device to become as optimised as possible.

Figure 5. Kate Crawford & Vladan Joler. “Anatomy of an AI System.” Anatomy of an AI System, 7 Sep. 2018, https://anatomyof.ai/

Amazon's Alexa is also being trained to interpret commands more precisely and trigger actions that map to the user's commands more accurately, in order to build a complete model of the user's preferences, habits, and desires (Carah). This optimisation is possible only because these devices rely upon the assimilation, analysis, and optimisation of vast quantities of human-generated images, texts, and videos, powered by the extraction of non-renewable materials, labour, and data, to work as accurately and efficiently as possible (Crawford & Joler).
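As a highly simplified illustration of mapping a command to an action, the sketch below matches keywords in a spoken phrase to a small set of hypothetical 'intents'. Real assistants such as Alexa use trained language models rather than keyword rules; everything here is invented for illustration.

```python
# A highly simplified, hypothetical sketch of the 'command -> interpretation
# -> action' loop. Real voice assistants use trained models, not keyword rules.

INTENT_KEYWORDS = {
    "play_music": {"play", "music", "song"},
    "set_timer": {"timer", "minutes", "remind"},
    "weather": {"weather", "rain", "forecast"},
}

def interpret(command: str) -> str:
    words = set(command.lower().split())
    # Pick the intent whose keywords overlap most with the command.
    best_intent, best_score = "unknown", 0
    for intent, keywords in INTENT_KEYWORDS.items():
        score = len(words & keywords)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent

print(interpret("Alexa, play my favourite song"))   # play_music
print(interpret("Will it rain today"))              # weather
```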

How do algorithms cause harm (and what can we do about it?)

Algorithms are created by humans and are therefore influenced by biases (Crawford & Paglen). It is consequently important to ask whose ideas and biases algorithms are being built around (Carah). Algorithms tend to remain hidden by nature, but they are political because they are based upon assumptions, and these assumptions can cause discrimination (Crawford & Paglen). It is important to understand these assumptions to avoid reinforcing histories of racial and gender discrimination that cause harm (Carah; Crawford & Paglen). These harms are referred to as harms of allocation and representation (Carah).

Allocative harm is immediate and quantifiable, such as economic harm, while representational harm is long-term and difficult to formalise, such as cultural harm (Carah). These allocations rest on classifications, which are made by forming judgements about a range of data such as features, predictors, and variables (Carah). To make this happen, a model is created and 'trained' using 'training data' (Carah). After the model is built, it is then 'tested' with previously unseen 'test data' (Carah).
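As a minimal sketch of this 'train, then test' workflow, the snippet below builds a classifier from made-up training data and evaluates it on held-out test data using the scikit-learn library; the features, labels, and model choice are assumptions for illustration only.

```python
# A minimal sketch of building a classifier from 'training data' and then
# checking it against previously unseen 'test data'. The dataset is made up.
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Hypothetical features (two numeric 'predictors') and labels (0 or 1).
X = [[1, 2], [2, 1], [3, 4], [4, 3], [5, 6], [6, 5], [7, 8], [8, 7]]
y = [0, 0, 0, 0, 1, 1, 1, 1]

# Split the data: the model never sees the test portion during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)                 # 'training' the model
predictions = model.predict(X_test)         # 'testing' on unseen data
print(accuracy_score(y_test, predictions))
```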

An example of algorithms causing harm comes from 2015, when it was revealed that only 11% of the images Google Images returned for the search term "CEO" contained women. This did not reflect the fact that 27% of CEOs in the USA at the time were women (see Fig. 6) (Carah).

Figure 6. Jennifer Langston. "Percentage of women in top 100 Google image search results for CEO: 11%. Percentage of U.S. CEOs who are women: 27%." Who's a CEO? Google Image Results Can Shift Gender Biases, 9 Apr. 2015, https://www.washington.edu/news/2015/04/09/whos-a-ceo-google-image-results-can-shift-gender-biases/

A few months later, another study revealed that Google advertisements for high-income jobs were shown much more often to men than to women (Carah). Although Google does not allow advertisers to discriminate when optimising their ads, platforms such as Facebook do allow advertisers to exclude certain types of people from seeing an advertisement by choosing, during the optimisation process, which classifications of people they do not want their ads shown to (Carah; Bucher).

Additionally, it was suggested that the past behaviour of users taught the algorithm that men clicked on these high-income ads more than women did, and the algorithm therefore learned to show these adverts mainly to men (Carah). These examples reinforce gendered discrimination (Carah).
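The sketch below simulates, in a deliberately crude way, how a system that optimises purely on observed click-through rates can end up showing a high-income ad overwhelmingly to one group. The groups, probabilities, and greedy targeting rule are invented for illustration and do not describe Google's or Facebook's actual ad systems.

```python
# A crude, hypothetical simulation of a click-optimising feedback loop.
# The system always targets the group with the higher *observed* click rate,
# so small early differences get locked in and amplified.
import random

random.seed(1)
true_click_prob = {"men": 0.06, "women": 0.05}      # invented, nearly equal
clicks = {"men": 1, "women": 1}                     # observed clicks so far
impressions = {"men": 1, "women": 1}                # observed impressions so far

for _ in range(10_000):
    # 'Optimisation': show the ad to the group with the higher observed rate.
    group = max(clicks, key=lambda g: clicks[g] / impressions[g])
    impressions[group] += 1
    if random.random() < true_click_prob[group]:
        clicks[group] += 1

print(impressions)   # impressions become heavily skewed towards one group
```

Even though the two groups' underlying behaviour is nearly identical, the greedy targeting rule amplifies small early differences into a large disparity in who sees the ad, which is the kind of feedback loop described above.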

To combat the harm that algorithms can cause, there needs to be algorithmic accountability (Carah). This means that algorithms need to be made transparent, observable and accountable to the public (Carah).

Discussion of: Excavating AI – The Politics of Images in Machine Learning Training Sets by Kate Crawford and Trevor Paglen

ImageNet is a widely used machine learning training set for training artificial intelligence systems (Crawford & Paglen). It perfectly showcases how algorithms are not infallible and how their biases can cause harm (Crawford & Paglen). Although many images in the database are labelled accurately, looking through the classifications applied to images of people makes it clear whose assumptions have been taught to the algorithm (see Fig. 7) (Crawford & Paglen).

Figure 7. Kate Crawford and Trevor Paglen. “Excavating AI: The Politics of Training Sets of Machine Learning.” Excavating AI, 19 Sep. 2019. https://excavating.ai

Fig. 7 showcases images that are labelled as 'woman', but the same images are also labelled as 'ball-buster' and 'ball-breaker' (defined in the figure as 'a demanding woman who destroys men's confidence') without any discernible reason (Crawford & Paglen).

Because machine learning systems are programmed by humans, they are not free from assumptions and biases (Carah; Crawford & Paglen). Therefore, it is important to consider the effects of these assumptions upon the data sets used to train AI systems, in order to avoid harms of allocation and representation (Carah; Crawford & Paglen).

These allocations rest on classifications made by forming decisions about a range of data such as features, predictors, and variables (see Fig. 8 for an example of classifications and branches of continuing sub-categories) (Crawford & Paglen).

Figure 8. Tengqi Ye. "A snapshot of two root-to-leaf branches of ImageNet: the top row is from the mammal sub-tree; the bottom row is from the vehicle sub-tree. For each synset, 6 randomly sampled images are presented in the figure." Visual Object Detection from Lifelogs using Visual Non-lifelog Data, 10 Jan. 2018, doi: 10.13140/RG.2.2.18463.46248

This is done by ‘training’ an AI by using ‘training data’ (Crawford & Paglen). After the model is built it is then ‘tested’ with new ‘test data’ (Crawford & Paglen).

Therefore, training the automated interpretation of images is a social and political concern (Crawford & Paglen). This means it is important to understand the politics within AI systems as they are integrated into social frameworks, and to understand how they influence our decisions (Crawford & Paglen). This is especially important as these programs are now being used by companies to decide who is offered a job interview, who is flagged as a criminal risk, and more (Crawford & Paglen).

Many harmful unexamined assumptions are exposed by analysing how these training sets work (Crawford & Paglen). This is because presumptions always influence how an AI system functions (Crawford & Paglen).

One way in which AI systems are 'trained' involves what is called 'deep learning' or 'deep neural networks' (Crawford & Paglen). Deep learning has become dominant in training AI because of major increases in available data and computer processing power (Crawford & Paglen). Deep learning approaches can be supervised or unsupervised, in which a network is given as many examples as possible from which to 'learn' a task (Crawford & Paglen).

These approaches are then used for classification tasks where it is difficult to describe features explicitly (such as training an AI to learn the difference between handwriting and a car numberplate) (Crawford & Paglen). With these approaches, each 'layer' of the network adds another 'distinction' that the network can start to make; however, if the process is unsupervised, the human user cannot know how the network arrived at its decision-making (Crawford & Paglen).
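To make the idea of stacked 'layers' of distinctions concrete, here is a toy two-layer neural network trained with plain NumPy on the classic XOR problem. It is a minimal sketch for illustration only; the architecture, learning rate, and data are arbitrary choices, and real deep networks trained on image sets are vastly larger.

```python
# A toy two-layer ('deep') neural network trained on the XOR problem with
# plain NumPy. Each layer adds a distinction that the next layer builds on;
# real image classifiers stack many more layers over far larger datasets.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Layer 1: 2 inputs -> 8 hidden units.  Layer 2: 8 hidden units -> 1 output.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
learning_rate = 0.5

for step in range(5000):
    # Forward pass: each layer transforms the previous layer's output.
    hidden = sigmoid(X @ W1 + b1)        # intermediate 'distinctions'
    output = sigmoid(hidden @ W2 + b2)   # final decision

    # Backward pass: cross-entropy gradient, propagated layer by layer.
    d_output = output - y
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)
    W2 -= learning_rate * hidden.T @ d_output
    b2 -= learning_rate * d_output.sum(axis=0)
    W1 -= learning_rate * X.T @ d_hidden
    b1 -= learning_rate * d_hidden.sum(axis=0)

print(output.round(2).ravel())   # should approach [0, 1, 1, 0] after training
```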

If all training goes well, the trained AI will be able to distinguish between images it has never seen before, including images of inanimate objects, as seen in Fig. 9 (Crawford & Paglen). On the software side, the algorithms perform a statistical survey of the images to develop a model that determines the differences between the two 'classes' (Crawford & Paglen).

Figure 9. Kate Crawford and Trevor Paglen. “Determining what’s in an image.” Excavating AI, 19 Sep. 2019. https://excavating.ai

Despite the belief that AI, and the data it utilises, classifies the world objectively and scientifically, politics, ideologies, prejudices, and other subjective elements are evident in AI (Crawford & Paglen). In fact, it is more common for an AI system to be influenced by biases than not (Crawford & Paglen).

Different training sets may have different goals and architectural designs that need to be considered when exploring algorithmic biases (Crawford & Paglen). But training sets for imaging systems all share some features in common (Crawford & Paglen). They are fundamentally a group of photos that have been categorised and labelled in a variety of ways (Crawford & Paglen).

As a result, a set's overall structure is made up of three layers, as seen in Fig. 10: the overall taxonomy (the collection of classes and, if applicable, their hierarchical nesting), the individual classes (the distinct categories into which images are grouped), and each individually labelled image. Politics leak into every layer of this design (Crawford & Paglen).

Figure 10. Kate Crawford and Trevor Paglen. “Taxonomy of categories.” Excavating AI, 19 Sep. 2019. https://excavating.ai
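The three layers described above can be pictured as a simple nested data structure. The sketch below is a hypothetical Python illustration, with invented categories and file names, of where human judgements enter at each layer; it is not ImageNet's actual format.

```python
# A sketch of the three layers of a training set as a nested structure:
# taxonomy -> classes -> individually labelled images. Every level encodes
# a human judgement. Categories and file names here are hypothetical.
training_set = {
    "taxonomy": {
        "person": {                          # top-level category
            "occupation": {                  # nested sub-category
                "doctor":  ["img_0001.jpg", "img_0002.jpg"],   # labelled images
                "cleaner": ["img_0003.jpg"],
            },
            "character_trait": {             # a far more contestable class
                "snob":    ["img_0004.jpg"], # labelling a photo with a judgement
            },
        },
    },
}

# Walking the structure makes the layered assumptions explicit.
for category, subtree in training_set["taxonomy"]["person"].items():
    for label, images in subtree.items():
        print(f"class '{label}' (under '{category}') contains {len(images)} labelled images")
```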

One level down from the taxonomy is the level of the class. The taxonomy asserts that 'emotions' are a group of visual concepts (Crawford & Paglen). At the level of the class, there are assumptions that 'neutral' facial expressions exist and that there are six intense emotional states: surprised, anxious, nauseated, irate, pitiful, and cheerful (Crawford & Paglen). At the level of the labelled image, further assumptions appear, such as 'this photo describes a lady with a cheerful facial expression' (Crawford & Paglen).

However, this is not accurate, as the lady in the photo is really only performing a happy face (Crawford & Paglen). These images contain facial expressions that are being 'performed', and therefore do not represent a person's actual interior state, since the subjects are only acting out facial expressions in a research facility (Crawford & Paglen). Each of the implicit claims made at each level is therefore open to scrutiny, and the algorithms will make at least some errors when labelling these kinds of images (Crawford & Paglen).

This idea also rests on a number of other assumptions, including the idea that the concepts contained in 'emotions' can be connected to photos of people's faces (because there are six emotions and a neutral state) (Crawford & Paglen). Built into this assumption is a fixed relationship between a person's facial expression and their actual interior state, meaning that the algorithms are trained to learn a correlation that is steady, quantifiable, and consistent across all the photos they are given (Crawford & Paglen).

Another harmful assumption made about the relationship between pictures and concepts is the idea that something about a person's basic character can be seen by analysing their facial and bodily features (Crawford & Paglen). ImageNet does this by assuming a person can be categorised by assessing their photo. Figs. 11, 12, and 13 are respectively labelled as "loser," "kleptomaniac," and "snob," despite the people in the photos showing no indication of 'being' such 'things' (Crawford & Paglen).

Figure 11. Kate Crawford and Trevor Paglen. “Loser.” Excavating AI, 19 Sep. 2019. https://excavating.ai
Figure 12. Kate Crawford and Trevor Paglen. "Kleptomaniac." Excavating AI, 19 Sep. 2019. https://excavating.ai
Figure 13. Kate Crawford and Trevor Paglen. “Snob.” Excavating AI, 19 Sep. 2019. https://excavating.ai

Additionally, Princeton University researchers analysed and correlated 2.2 million words using 'off-the-shelf' machine learning software (Turner Lee, Resnick, & Barton). The words 'woman' and 'girl' were found to be more frequently associated with the arts than with science and maths, which in turn were most likely to be associated with men (Turner Lee, Resnick, & Barton). European names were also treated as more 'pleasant' than African-American names (Turner Lee, Resnick, & Barton).

By analysing word associations within the training data, it is clear that the machine learning algorithm adopted the racial and gender biases it was shown by humans (Turner Lee, Resnick, & Barton). If an algorithm containing these biases were used as part of a search engine ranking algorithm or in an auto-complete tool, it could reinforce racial and gender biases over time (Turner Lee, Resnick, & Barton).
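The sketch below shows the kind of cosine-similarity comparison used to surface such associations, using invented three-dimensional word vectors rather than the embeddings the Princeton researchers analysed.

```python
# A sketch of measuring word associations with cosine similarity, using
# invented 3-dimensional vectors (real studies use embeddings learned from
# millions of words). If 'woman' sits closer to 'arts' than to 'science',
# the embedding has absorbed that association from its training text.
import numpy as np

vectors = {                      # hypothetical embeddings
    "woman":   np.array([0.9, 0.2, 0.1]),
    "man":     np.array([0.1, 0.9, 0.2]),
    "arts":    np.array([0.8, 0.3, 0.2]),
    "science": np.array([0.2, 0.8, 0.3]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for word in ("woman", "man"):
    print(word,
          "arts:",    round(cosine(vectors[word], vectors["arts"]), 2),
          "science:", round(cosine(vectors[word], vectors["science"]), 2))
```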

It is clear that the training sets used to train artificial intelligence systems contain biases (Crawford & Paglen). Without properly understanding how these biases come to be and how they influence the decisions made by AI, harmful assumptions will continue to be perpetuated and can cause harms of allocation and representation (Crawford & Paglen).

Works Cited
Alaimo, Cristina, & Kallinikos, Jannis. “Computing the everyday: Social media as data platforms.” The Information Society, vol. 33, no. 4, 2017, pp. 175-191, Taylor & Francis Online, doi: 10.1080/01972243.2017.1318327

Bucher, Taina. “The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms.” Information, Communication, and Society, vol. 20, no. 1, 2017, Taylor & Francis Online, doi: 10.1080/1369118X.2016.1154086

Carah, Nicholas. COMU3110 Digital Platforms Seminars 1-6. 2022, University of Queensland, Saint Lucia. Class lecture.

Cooper, Paige. "How the YouTube Algorithm Works in 2022: The Complete Guide." Hootsuite, 21 Jun. 2021, https://blog.hootsuite.com/how-the-youtube-algorithm-works/

Crawford, Kate, & Joler, Vladan. "The Amazon Echo As An Anatomical Map of Human Labor, Data and Planetary Resources." AI Now Institute and Share Lab, 7 Sep. 2018, https://anatomyof.ai/

Crawford, Kate, & Paglen, Trevor. "Excavating AI: The Politics of Training Sets of Machine Learning." Excavating AI, 19 Sep. 2019, https://excavating.ai/

GRIN. "CGI Influencers: What Are They and How to Work With Them." GRIN, 22 May 2022, https://grin.co/blog/cgi-influencers/

Hicks, Mar. "Computer Love: Replicating Social Order Through Early Computer Dating Systems." Gender, New Media, and Technology, vol. 10, 2016, doi: 10.7264/N3NP22QR

Turner Lee, Nicol, Resnick, Paul, & Barton, Genie. "Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms." Brookings, 22 May 2019, https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/
