SONAAR - Social Networks Accessible Authoring

Mobile service for authoring of accessible content

Deliverable D7.3 (Final version with fixes from the user evaluation feedback)

Document Technical Details

Document Number: D7.3
Document title: Mobile service for authoring of accessible content
Version: 1.0
Document status: Final version
Work package/task: WP3/Task 3.3
Delivery type: Software prototype
Due date of deliverable: July 30, 2021
Actual date of submission: July 30, 2021
Confidentiality: Public

Document History

Version Date Status Author Description
0.1 23/07/2021 Draft Letícia Pereira First draft
0.2 24/07/2021 Draft Carlos Duarte Final draft
0.5 28/07/2021 Draft André Rodrigues Review
0.6 28/07/2021 Draft José Coelho Review
0.7 29/07/2021 Draft Letícia Pereira Review
1.0 30/07/2021 Final Carlos Duarte Final version

Introduction

SONAAR aims to facilitate the user generation of accessible content on social network services by developing a solution that supports the authoring and consumption of media content on social platforms on both desktop and mobile devices. In addition to improving the accessibility of this content, the proposed solution also has the potential to raise awareness of the importance of authoring accessible content by engaging users in accessible authoring practices.

This deliverable concerns work packages 1 and 3 of the SONAAR project. In WP1, the work focused on extending the features reported in D7.1 and D7.2, in particular updating the support for Facebook and Twitter interface changes, implementing a request log, and deploying the description quality assessment algorithm. In WP3, the work focused on dissemination strategies to better investigate motivational factors for engaging users in the production of accessible content.

This document is structured as follows: the following section describes the new functionalities deployed in the current version of the prototype, including the required updates to the backend service, and also presents the final implemented workflows. The next section describes the user studies currently being conducted to collect feedback on the prototypes and on the effectiveness of the documentation. The final section explains how the SONAAR Android application can be installed.

Functionality updates

In this section we describe the new features integrated in the latest version of the Android application to enhance the usability of SONAAR and to support approaches that improve the suggestion of alternative descriptions.

As previously reported in D7.2, the suggestions offered by SONAAR come from three different sources: descriptions provided by users, image concepts identified by Clarifai, and text recognised in the image. Although in this version the sources remain the same, some improvements were made to the presentation and ordering of the suggestions. First, the algorithm that measures the quality of the descriptions provided for a specific image was put in place. Combined with the number of times each description has been used, this is the main basis for the answers provided by our backend service, and it is further explored in the next section. We also anticipate that, with more extensive use of SONAAR, some popular images may accumulate a fair number of descriptions in our database. In order not to overload users, we limit the list provided to the user to the first 5 descriptions available in our database, sorted by our quality measure and the number of times each description has been used. Finally, as two of our description sources are automatically generated, i.e. image concepts and text recognition, we include a message indicating that the description was generated by SONAAR. With this message, the user is aware that the description comes from an automatic source. In addition, it might increase the dissemination of SONAAR to other users consuming this description.
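To illustrate this behaviour, the following TypeScript sketch (the same language as the Annex I code) shows how suggestions from automatic sources could be flagged; the Suggestion type and the notice text are our assumptions for illustration, not the actual implementation.

// Illustrative sketch of flagging automatically generated suggestions.
// The Suggestion type and the notice text are assumptions, not actual code.
interface Suggestion {
    text: string;
    source: "user" | "concepts" | "ocr"; // the three SONAAR description sources
}

function presentSuggestion(s: Suggestion): string {
    // User-provided descriptions are shown as-is; automatic ones are flagged
    // so the user knows the description was generated by SONAAR.
    return s.source === "user" ? s.text : "Generated by SONAAR: " + s.text;
}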

Updates to the backend service

As described in previous deliverables, in particular D5, D6.2 and D7.2, the backend service is composed of a database that stores image descriptions previously provided by SONAAR users. This information is linked to an image identifier, the language the description is written in, the number of times the description has been used, and, now, a unique user ID linking to a request log and a quality rating that assists in sorting the list of suggestions sent back to the client.
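For illustration, the information stored for each description could be represented as follows; this is a TypeScript sketch whose interface and field names are our assumptions, not the actual database schema.

// Illustrative sketch (not the actual schema) of the data stored per description.
interface StoredDescription {
    imageId: string;     // Clarifai image identifier
    description: string; // alternative text provided by a SONAAR user
    language: string;    // language the description is written in
    timesUsed: number;   // number of times this description has been used
    userId: string;      // unique user ID linking to the request log
    quality: number;     // quality measure used to sort suggestions
}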

The request log stores, in addition to the unique user ID, whether the request came from the mobile or the extension client, from an authoring or a consumption scenario, from the Twitter or the Facebook interface, and whether a new description was stored. With this information we will be able to better understand how people are using SONAAR and to further explore users' preferences regarding image descriptions, in order to better guide future efforts and research on this topic. That discussion will be conducted in future deliverables (D2 and D10). The user ID is not linked to any username or account and does not allow us to identify the user in any way; it only allows us to know which requests originated from the same device.
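Similarly, a request log entry could be sketched as follows; the field names are again illustrative assumptions rather than the actual implementation.

// Illustrative sketch of a request log entry; field names are assumptions.
interface RequestLogEntry {
    userId: string;                        // device-level ID, not linked to any account
    client: "mobile" | "extension";        // which front end issued the request
    scenario: "authoring" | "consumption"; // how the request originated
    platform: "twitter" | "facebook";      // social network interface in use
    storedNewDescription: boolean;         // whether a new description was stored
}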

Furthermore, our quality measure returns a metric of similarity between the terms in the image description and several features of the image, such as the concepts present in the image as well as different synonyms for each one of them. The algorithm applies the same logic as the one used in our previous work, i.e. extracting image features to compare with a text. As a result, we obtain a semantic measure between an image and a description on a scale of 0 to 1. Some of the libraries originally used are no longer available, so in what follows we briefly describe how this semantic analysis is now implemented. The algorithm is integrated in the SONAAR prototypes and available in Annex I, as well as in the project's open repository at https://github.com/SONAARProject/backend/blob/476ce883b4d2fb47a4935f83b917fabfa2059995/src/lib/quality.ts
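For reference, a minimal usage sketch of the Annex I function is shown below; it assumes the module exports calcAltTextQualityForImage, and the image identifier is a placeholder rather than a real Clarifai ID.

// Minimal usage sketch; assumes the Annex I module exports the function,
// and the identifier below is a placeholder, not a real Clarifai ID.
import { calcAltTextQualityForImage } from "./quality";

async function example(): Promise<void> {
    const quality = await calcAltTextQualityForImage(
        "clarifai-image-id",
        "a pug wrapped in a blanket sitting on a bed"
    );
    console.log(`Quality measure: ${quality}`);
}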

The first image feature extracted is the list of concepts provided by Clarifai. This list is used to calculate two main aspects of the analysis. The first is the number of concepts, or their synonyms, included in the description. To collect synonyms we use the DataMuse API (https://www.datamuse.com/api/). Following that, we use the Dandelion API (https://dandelion.eu/) to obtain a score of the similarity between the whole list of concepts and the given description. Considering the SONAAR context, before calculating the final quality measure, we also integrate a penalty score. This penalty is applied when the description matches the list of concepts that SONAAR generates for that image; we use the string "The image may contain…" to identify such cases. This ensures that descriptions provided by users themselves have a higher quality rating than automatic ones. However, users will still receive these automatic suggestions when no user has previously provided a description for that image. A smaller penalty is also applied when all concepts are present in the description, implying that the given description was also generated by an automatic source or that the user did not invest effort in adapting the automated suggestion to the context of the publication. Finally, we calculate the quality measure according to the following equation:

quality = similarity - penalty + (concepts present in the description + synonyms present in the description) / total number of concepts

As a result, we obtain a quality score for each description of an image, where higher values indicate a better match. In order to evaluate the new version of this algorithm, we gathered a total of 40 images from the web. These images were collected from resources on accessibility good practices, such as the WebAIM guide on how to provide adequate alternative text for images (https://webaim.org/techniques/alttext/). The full list of resources is presented in Annex II. For each image, we collected the examples of good and bad alternative descriptions provided by those resources, between two and four different descriptions per image. Our algorithm was used to assess the quality of each description of each image. For each image we ordered the descriptions according to the quality measure and compared that order with the ordering suggested by the resource from which we collected the image. Our results show that our algorithm was able to exactly match the order suggested by the resource for 52.5% of the images. For 25% of the images, our algorithm changed the order of only two descriptions. For the remaining 22.5%, our algorithm ordered the descriptions with more than one change to the order provided by the resource. The images where the algorithm's order resulted in more changes to the resource ordering are images whose descriptions identify specific persons or sites by name. Given that the concept identification mechanism from Clarifai does not identify specific persons or sites, the semantic similarity between "person" (the concept reported by Clarifai) and "name of person" (the text in the alternative description) is low, which prompts the algorithm to rank those descriptions below descriptions that do not identify a specific person or site.
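The comparison between orderings can be summarised by the following sketch, where scoreDescription is a synchronous stand-in for the Annex I quality computation; the function name and signature are hypothetical.

// Illustrative sketch of the evaluation: order the descriptions of an image by
// our quality measure and count how many positions differ from the reference
// ordering given by the resource (0 means an exact match).
function orderingChanges(
    referenceOrder: string[],               // best-to-worst, as given by the resource
    scoreDescription: (d: string) => number // stand-in for the quality measure
): number {
    const ourOrder = [...referenceOrder].sort(
        (a, b) => scoreDescription(b) - scoreDescription(a)
    );
    return ourOrder.filter((d, i) => d !== referenceOrder[i]).length;
}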

Next, this information is combined with the number of times a description has been used to sort the list of descriptions in the same language as the user's device or browser. For each description of an image, the sorting value is the share of times that description has been used for that image (a value between 0 and 1) plus the quality of the description for that image, divided by two:

sortingValue = ( (times description used / times all descriptions for this image were used) + quality ) / 2

The top five descriptions after sorting are sent back to the client.
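A sketch of this ranking step, assuming the StoredDescription record outlined earlier, could read as follows.

// Sketch of the final ranking: combine usage share and quality, keep the top 5.
function rankDescriptions(descriptions: StoredDescription[]): StoredDescription[] {
    const totalUses = descriptions.reduce((sum, d) => sum + d.timesUsed, 0) || 1;
    return descriptions
        .map((d) => ({
            ...d,
            // (usage share + quality) / 2, as in the equation above
            sortingValue: (d.timesUsed / totalUses + d.quality) / 2,
        }))
        .sort((a, b) => b.sortingValue - a.sortingValue)
        .slice(0, 5); // only the top five descriptions are sent to the client
}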

Final workflows

In this section, we describe the set of workflows implemented and updated to support the new features deployed.

Implemented backend workflows

The first workflow established for the backend service defines the descriptions that are sent to our front end clients when a request is received. The request includes the language identified by the front end client, and with that information we limit our selection to descriptions in the same language. The order of this list is now defined by the usage count of each of the available sources, as well as by the quality measure, as described in the previous section. A sketch of this logic follows the list below.

  • Answering a request - Default language:
    • Search for a previous entry for this image using Clarifai image recognition
    • When no other instance of this image is identified in the database:
      • Store the image identifier
      • Store the image concepts identified by Clarifai
      • Store any text that has been recognized by Clarifai in the image
      • Return a list composed of the concepts and the recognized text
    • When an instance of this image is identified in the database:
      • Search for alternative descriptions previously provided by other SONAAR users for the same image in the same language
      • Search for the concept list provided by Clarifai for the same image
      • Search for any text recognized in the image
      • Return an ordered list of descriptions, concepts and text
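The following TypeScript sketch summarises this workflow; all helper functions are declared as stubs and are illustrative assumptions standing in for the actual backend and Clarifai calls.

// Sketch of the request-answering workflow above; helper names are assumptions.
declare function findImage(image: Buffer): Promise<{ id: string; concepts: string[]; text: string } | null>;
declare function getConcepts(image: Buffer): Promise<string[]>;
declare function recognizeText(image: Buffer): Promise<string>;
declare function storeImage(image: Buffer, concepts: string[], text: string): Promise<void>;
declare function findDescriptions(imageId: string, language: string): Promise<string[]>;
declare function rankAndLimit(descriptions: string[], concepts: string[], text: string): string[];

async function answerRequest(image: Buffer, language: string): Promise<string[]> {
    // Identify the image through Clarifai image recognition.
    const match = await findImage(image);
    if (!match) {
        // First time we see this image: store its identifier, concepts and
        // recognized text, and return only the automatically generated content.
        const concepts = await getConcepts(image);
        const text = await recognizeText(image);
        await storeImage(image, concepts, text);
        return [...concepts, text];
    }
    // Known image: combine stored user descriptions (same language) with the
    // stored concepts and recognized text, ordered as described earlier.
    const descriptions = await findDescriptions(match.id, language);
    return rankAndLimit(descriptions, match.concepts, match.text);
}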

The following workflow defines the steps taken when the backend receives an image and a description from the front end client; the backend now also stores the quality measure of each new description. A sketch of this logic follows the list below.

  • Receiving an entry
    • Search for a previous entry for this image using Clarifai image recognition
      • Search for the description in the list of stored descriptions of that image
      • When the description has already been used for the image
        • Increment the counter of times this description was used
      • When the description has not been used before for the image
        • Store the description provided
        • Determine and store the language the description is written in
        • Initialize the counter of times this description was used
        • Determine and store the quality measure
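A corresponding sketch for this workflow is given below; the helpers are again illustrative stubs, except calcAltTextQualityForImage, which is the Annex I function.

// Sketch of the entry-receiving workflow above; helper names are assumptions.
declare function findImage(image: Buffer): Promise<{ id: string }>;
declare function findDescription(imageId: string, description: string): Promise<{ id: string } | null>;
declare function incrementUsageCount(descriptionId: string): Promise<void>;
declare function detectLanguage(description: string): string;
declare function storeDescription(imageId: string, description: string, language: string, quality: number): Promise<void>;
declare function calcAltTextQualityForImage(imageId: string, description: string): Promise<number>;

async function receiveEntry(image: Buffer, description: string): Promise<void> {
    // Identify the image through Clarifai image recognition.
    const match = await findImage(image);
    const stored = await findDescription(match.id, description);
    if (stored) {
        // The description is already known for this image.
        await incrementUsageCount(stored.id);
    } else {
        // New description: detect its language, compute its quality measure,
        // and store everything (the usage counter is initialized on storage).
        const language = detectLanguage(description);
        const quality = await calcAltTextQualityForImage(match.id, description);
        await storeDescription(match.id, description, language, quality);
    }
}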

Implemented workflow for supporting accessible content on social networks

As previously described in D7.2, three different workflows were implemented for supporting the authoring of accessible content on social networks, differing in how the suggestions are presented to the user. In the first scenario, the user is presented with the top-ranked description and an option to ask for more results. In an extension of this scenario, after asking for more results the user is presented with the complete list of descriptions provided by the backend service. In the last scenario, the user is immediately presented with the full list of descriptions. In all cases, the user then has the option to copy the chosen suggestion to the clipboard and is presented with an indication of where to paste the description to complete the authoring flow.

The main workflow implemented in the prototype available for download uses notifications to present the user with the first description provided by our backend service, according to the ordering criteria previously described, together with an option to display the complete list of descriptions:

  • List of results
    • User selects the media upload button
      Screenshot: Compose new Tweet window.

    • User selects the image to be uploaded

    • SONAAR queries the backend service

    • SONAAR opens a notification containing:

      • A message informing that a description was found for that image
      • The description suggested for that image
      • A message informing the user where to include the description in the post or in the tweet
      • A button allowing the user to copy the description to the clipboard
      • A button allowing the user to ask for more results
      Screenshot: Notification drawer expanded view containing a SONAAR notification with the message: SONAAR detected that you are posting an image to Twitter. A possible alt text for your image is: a pug wrapped in a blanket sitting on a bed. Please consider adding it to your image post. To do so click on the +Alt button. The message is followed by two buttons: Copy to the clipboard and See more.
    • User selects the option to copy the description to the clipboard

    • User pastes the description into the corresponding input box

    • User may edit the description
      Screenshot: Twitter window to write alt text, presenting the uploaded image followed by the description input box containing the text: a pug wrapped in a blanket sitting on a bed: alt text by sonaar.

    • User confirms the upload of that image

    • SONAAR logs the information provided by the user

In an extension of this workflow, the user asks for the complete list of descriptions in order to complement the information provided.

  • Ask for more results
    • User selects the media upload button
      Screenshot: Compose new Tweet window.
    • User selects the image to be uploaded
    • SONAAR queries the backend service
    • SONAAR opens a notification containing:
      • A message informing that a description was found for that image
      • The description suggested for that image
      • A message informing the user where to include the description in the post or in the tweet
      • A button allowing the user to copy the description to the clipboard
      • A button allowing the user to ask for more results
      Screenshot: Notification drawer expanded view containing a SONAAR notification with the message: SONAAR detected that you are posting an image to Twitter. A possible alt text for your image is: a pug wrapped in a blanket sitting on a bed. Please consider adding it to your image post. To do so click on the +Alt button. The message is followed by two buttons: Copy to the clipboard and See more.
    • User selects the option to ask for more results
    • SONAAR opens a window containing:
      • A message asking the user to select one of the descriptions to be copied to the clipboard
      • The complete list of descriptions identified for that image
      Screenshot: Compose new Tweet window with an overlay window generated by SONAAR with the following message: Click on one item to copy it to the clipboard. Next, a list is presented with the following items: a pug wrapped in a blanket sitting on a bed, dog, sleep, and cute. At the bottom a Close button is also displayed.
    • User selects one description to copy to the clipboard
    • SONAAR closes the window
    • User pastes the description into the corresponding input box
    • User may edit the description
      Screenshot: Twitter window to write alt text, presenting the uploaded image followed by the description input box containing the text: a pug wrapped in a blanket sitting on a bed: alt text by sonaar.
    • User confirms the upload of that image
    • SONAAR logs the information provided by the user

Collecting user feedback

In order to collect input on the resources provided by SONAAR, i.e. the prototypes developed and the documentation for authoring accessible content, we are currently conducting two studies with social network users, guided by two main objectives: to validate with users the effectiveness of the documentation for authoring accessible content, and to validate with users the new interaction flow for accessible content authoring.

The first study addresses both of these objectives. Participants can have different profiles, i.e. blind and sighted users, the only requirement being that they are Twitter or Facebook users. Participants are asked to fill out a form containing questions about their social network usage and accessibility practices in this context, and are then asked to use SONAAR in their daily routine, on the social network they normally use. After a period of two weeks, a new form is sent containing questions about their experience using SONAAR and an invitation to an interview. A call for participation was widely disseminated through our research team's social media and disability related communities. Through this dissemination, we also expect users to download and try out SONAAR; in that case, the request log integrated into our backend service helps us gather more information about the usage of our prototypes.

The second study is an open survey on the documentation provided for the authoring of accessible content. In that study, participants are asked to read the available information and answer questions about its clarity, comprehensiveness and usefulness. In addition, we also ask participants whether they learned new information and whether this documentation might engage more users in accessible practices. Finally, participants are invited to share any comments or suggestions on how to improve the documentation, or other thoughts in general.

The results of both studies will guide future improvements to the documentation and will also be reported and further discussed in the next deliverables (D2 and D10).

Setup Instructions

The SONAAR mobile service was developed and tested on a Google Pixel 2 running Android 11. In order to work correctly, the service must be run on an Android device running at least Android 9 and with the language set to English or Portuguese.

The current version of SONAAR is available for download on the Google Play store at: https://play.google.com/store/apps/details?id=pt.fcul.lasige.sonaar

The mobile service can also be manually installed:

  1. Download the apk file from https://github.com/SONAARProject/mobile-client
  2. Enable "install from unknown sources" on the app you start the installation from
  3. Open the app, accept the storage permission and enable the accessibility service

The application is constantly being updated with new features developed during the project.

Annex I: Quality measure algorithm


import fetch from "node-fetch";
import { getImageConcepts } from "./database";
import { readFileSync } from "fs";

const token = readFileSync("../dandelion.key", "utf-8").trim();

// Computes the quality measure of an alternative description (altText) for the
// image identified by clarifaiId: semantic similarity plus concept and synonym
// coverage, minus a penalty for automatically generated descriptions.
async function calcAltTextQualityForImage(clarifaiId: string, altText: string) {
    const imageConcepts = await getImageConcepts(clarifaiId);

    let totalConcepts = 0;
    let conceptsInAlt = 0;
    let synonymsInAlt = 0;
    let conceptString = "";
    for (const concept of imageConcepts) {
        conceptString += concept;
        conceptString += " ";
        totalConcepts++;
        if (isConceptInAlt(concept, altText)) {
            conceptsInAlt++;
        } else if (await isSynonymInAlt(concept, altText)) {
            synonymsInAlt++;
        }
    }
    // Semantic similarity (0 to 1) between the concept list and the description.
    const similarity = await getSimilarity(conceptString, altText);
    let penalty = 0;
    // Smaller penalty: every concept appears verbatim, which suggests an
    // unedited automatic suggestion.
    if (totalConcepts > 0 && conceptsInAlt === totalConcepts) {
        penalty += 0.5;
    }
    // Larger penalty: the description was generated by SONAAR itself.
    if (altText.includes("The image may contain")) {
        penalty += 1;
    }
    // Guard against images with no identified concepts (avoids division by zero).
    if (totalConcepts === 0) {
        return similarity - penalty;
    }
    const quality = similarity - penalty + (conceptsInAlt + synonymsInAlt) / totalConcepts;
    return quality;
}


// Checks whether a concept (or its plural form) appears in the description.
function isConceptInAlt(concept: string, alt: string): boolean {
    const description = alt.toLowerCase();
    let found = description.includes(concept);
    if (!found) {
        found = checkPlurals(concept, description);
    }
    return found;
}

// Checks whether any synonym of the concept appears in the description.
async function isSynonymInAlt(concept: string, alt: string): Promise<boolean> {
    const description = alt.toLowerCase();
    const synonymList = await getSynonyms(concept);
    for (const synonym of synonymList) {
        if (description.includes(synonym)) {
            return true;
        }
    }
    return false;
}

// Checks whether the plural form of a word appears in the sentence.
function checkPlurals(word: string, sentence: string): boolean {
    const plural = getPlural(word);
    return sentence.includes(plural);
}

// Returns a heuristic English plural of a word, handling common terminations
// and a set of irregular nouns.
function getPlural(word: string): string {
    if (word.length < 3) {
        return word;
    }
    const vowels = ['a', 'e', 'i', 'o', 'u'];
    const termination11 = ['s', 'x', 'z'];
    const termination12 = ['ss', 'ch', 'sh'];
    const termination13 = 'o';
    const termination14 = ['calf', 'half', 'leaf', 'loaf', 'self', 'sheaf', 'shelf', 'thief', 'wolf'];
    const termination15 = ['wife', 'life', 'knife'];
    const termination16 = ['ief', 'oof', 'eef', 'ff', 'rf'];
    const lastChar = word.substr(word.length - 1, 1);
    const lastTwoChar = word.substr(word.length - 2, 2);
    const lastThreeChar = word.substr(word.length - 3, 3);
    const irregular = ['goose', 'foot', 'mouse', 'woman', 'louse', 'man', 'tooth', 'die', 'child', 'ox'];

    if (irregular.includes(word)) {
        return getIrregularPlural(word);
    } else if (termination11.includes(lastChar) || termination12.includes(lastTwoChar)) {
        return word + 'es';
    } else if (lastChar === termination13) {
        const beforeLastChar = word.substr(word.length - 2, 1);
        if (vowels.includes(beforeLastChar)) {
            return word + 's';
        } else {
            return word + 'es';
        }
    } else if (lastChar === 'y') {
        const beforeLastChar = word.substr(word.length - 2, 1);
        if (vowels.includes(beforeLastChar)) {
            return word + 's';
        } else {
            return word.substr(0, word.length - 1) + 'ies';
        }
    } else if (termination14.includes(word)) {
        return word.substr(0, word.length - 1) + 'ves';
    } else if (termination15.includes(word)) {
        return word.substr(0, word.length - 2) + 'ves';
    } else if (termination16.includes(lastTwoChar) || termination16.includes(lastThreeChar)) {
        return word + 's';
    } else {
        return word + 's';
    }
}

// Maps an irregular noun to its plural form.
function getIrregularPlural(word: string): string {
    switch (word) {
        case 'goose': return 'geese';
        case 'foot': return 'feet';
        case 'mouse': return 'mice';
        case 'woman': return 'women';
        case 'louse': return 'lice';
        case 'man': return 'men';
        case 'tooth': return 'teeth';
        case 'die': return 'dice';
        case 'child': return 'children';
        case 'ox': return 'oxen';
    }
    return '';
}

// Fetches synonyms of a word from the DataMuse API ("ml" query, "syn" tag).
async function getSynonyms(word: string): Promise<Array<string>> {
    let synonyms: string[] = [];
    const url = 'https://api.datamuse.com/words?ml=' + encodeURIComponent(word);
    const response: any = await (await fetch(url)).json();
    for (const entry of response) {
        if (entry.tags && entry.tags.includes("syn")) {
            synonyms.push(entry.word);
        }
    }
    return synonyms;
}

// Queries the Dandelion text similarity API for a score between 0 and 1.
async function getSimilarity(text1: string, text2: string): Promise<number> {
    const url = 'https://api.dandelion.eu/datatxt/sim/v1/?text1=' + encodeURIComponent(text1) + '&text2=' + encodeURIComponent(text2) + '&lang=en&token=' + encodeURIComponent(token);
    const response: any = await (await fetch(url)).json();
    return response.similarity;
}


Annex II: List of sources and images used on the quality measure algorithm

Image Link Page Link
https://moz-static.s3.amazonaws.com/learn/seo/Alt-Tag-page/alt-tag-image-3.png?mtime=20170315125707 https://moz.com/learn/seo/alt-text
https://moz-static.s3.amazonaws.com/learn/seo/Alt-Tag-page/alt-tag-image-5.png?mtime=20170315125709 https://moz.com/learn/seo/alt-text
https://moz-static.s3.amazonaws.com/learn/seo/Alt-Tag-page/alt-tag-image-6.png?mtime=20170315125710 https://moz.com/learn/seo/alt-text
https://moz-static.s3.amazonaws.com/learn/seo/Alt-Tag-page/alt-tag-image-7.png?mtime=20170315125705 https://moz.com/learn/seo/alt-text
https://static.hwpi.harvard.edu/files/styles/os_files_xlarge/public/online-accessibility-huit/files/032012_stadiumsteps_131_223861_01.jpg?m=1585245978&itok=IRvkxP3N https://accessibility.huit.harvard.edu/describe-content-images
https://webaim.org/techniques/alttext/media/gw.jpg https://webaim.org/techniques/alttext/
https://webaim.org/techniques/alttext/media/gw2.jpg https://webaim.org/techniques/alttext/
https://d1mdce1aauxocd.cloudfront.net/_imager/files/Example-Images/Landscape/36/bear_e15e6a1edfd4a0a52e6105faef38c211.jpg https://supercooldesign.co.uk/blog/how-to-write-good-alt-text
https://blog.hubspot.com/hs-fs/hubfs/david-ortiz-fenway.jpg?width=500&name=david-ortiz-fenway.jpg https://blog.hubspot.com/marketing/image-alt-text?toc-variant-b=
https://blog.hubspot.com/hs-fs/hubfs/women-on-computer.jpg?width=500&name=women-on-computer.jpg https://blog.hubspot.com/marketing/image-alt-text?toc-variant-b=
https://www.med.unc.edu/webguide/wp-content/uploads/sites/419/2021/01/UNC-Medical-Center-600x384.jpg https://www.med.unc.edu/webguide/accessibility/alt-text/
https://www.med.unc.edu/webguide/wp-content/uploads/sites/419/2021/01/Old-Well-Spring-600x399.jpg https://www.med.unc.edu/webguide/accessibility/alt-text/
https://www.reliablesoft.net/wp-content/uploads/2019/06/brownies.jpg https://www.reliablesoft.net/alt-text/
https://www.reliablesoft.net/wp-content/uploads/2019/06/beagle-dog-300x200.jpg https://www.reliablesoft.net/alt-text/
https://cliquestudios.com/wp-content/uploads/2018/07/pug-in-a-blanket-1200x800.jpg https://cliquestudios.com/alt-text/
https://cliquestudios.com/wp-content/uploads/2018/07/neon-sign-eat-what-makes-you-happy.jpg https://cliquestudios.com/alt-text/
https://ahrefs.com/blog/wp-content/uploads/2020/03/cheesecake.png https://ahrefs.com/blog/alt-text/
https://ahrefs.com/blog/wp-content/uploads/2020/03/amp.png https://ahrefs.com/blog/alt-text/
https://case.edu/accessibility/sites/case.edu.accessibility/files/styles/large/public/2018-04/170719%20CWRU%20Students-368HighRes.jpg?itok=tLUnkyrG https://case.edu/accessibility/what-accessibility/guidelines/alternative-text
https://www.txstate.edu/cache2450e4c6315fa958955a373ad60ddba0/imagehandler/scaler/gato-docs.its.txstate.edu/jcr:b8feda94-6d3d-4402-8f7d-d054757df94d/_DSC4814.JPG?mode=fit&width=1504 https://doit.txstate.edu/accessibility/user-guides/images-alt-text.html
https://accessibility.umn.edu/sites/accessibility.umn.edu/files/styles/folwell_third/public/2020-01/goldymascotsports_image.jpg?itok=3ZMbqjCD https://accessibility.umn.edu/what-you-can-do/start-7-core-skills/alternative-text
https://mk0imageseoidwihge0j.kinstacdn.com/wp-content/uploads/2019/12/Notre-dame-Paris-768x496.png https://imageseo.io/alt-text-seo/
https://mk0imageseoidwihge0j.kinstacdn.com/wp-content/uploads/2019/11/Kangaroo.jpg https://imageseo.io/alt-text-seo/
https://mk0imageseoidwihge0j.kinstacdn.com/wp-content/uploads/2019/06/white-rose-flower-1024x640.jpg https://imageseo.io/alt-text-seo/
https://mk0imageseoidwihge0j.kinstacdn.com/wp-content/uploads/2019/12/tesla-car.jpeg https://imageseo.io/alt-text-seo/
https://www.commonplaces.com/hs-fs/hub/203683/file-3007897121-jpg/blog-files/grey-and-white-rabbit.jpg?width=360&height=225&name=grey-and-white-rabbit.jpg https://www.commonplaces.com/blog/writing-alt-tags-for-images/
https://libapps.s3.amazonaws.com/accounts/215894/images/Google_Chrome_icon.png https://libguides.utk.edu/c.php?g=974111&p=7356242
https://teaching.pitt.edu/wp-content/uploads/2018/12/accessibility-bird-alt-text-example.png https://teaching.pitt.edu/accessibility/recommendations/alternative-text/
https://www.3playmedia.com/wp-content/uploads/SyrioatChristmas.png https://www.3playmedia.com/blog/alt-text-marketing/
https://www.innovationvisual.com/hs-fs/hubfs/Picture1-1-300x193.png?width=319&name=Picture1-1-300x193.png https://www.innovationvisual.com/knowledge/why-image-alt-text-is-important-for-seo
https://www.innovationvisual.com/hs-fs/hubfs/Picture2-1-300x200.png?width=320&name=Picture2-1-300x200.png https://www.innovationvisual.com/knowledge/why-image-alt-text-is-important-for-seo
https://www.innovationvisual.com/hs-fs/hubfs/Picture3-300x224.png?width=328&name=Picture3-300x224.png https://www.innovationvisual.com/knowledge/why-image-alt-text-is-important-for-seo
https://library.gwu.edu/sites/default/files/content-editing/pasted%20image%200%283%29.png https://library.gwu.edu/content/alt-text
https://library.gwu.edu/sites/default/files/content-editing/kiev.jpg https://library.gwu.edu/content/alt-text
https://library.gwu.edu/sites/default/files/content-editing/CUFR-large-brand.jpg https://library.gwu.edu/content/alt-text
https://bighack.org/wp-content/uploads/2019/10/Golden-Retriever-alt-text.jpg https://bighack.org/how-to-write-better-alt-text-descriptions-for-accessibility/
https://bighack.org/wp-content/uploads/2019/10/Fish-and-chips.jpg https://bighack.org/how-to-write-better-alt-text-descriptions-for-accessibility/
https://bighack.org/wp-content/uploads/2019/10/800px-Anne_Boleyn_London_Tower-768x1026.jpg https://bighack.org/how-to-write-better-alt-text-descriptions-for-accessibility/
https://www.bluleadz.com/hs-fs/hubfs/Blog_pics/alt-text-example_294303236.jpeg?width=550&name=alt-text-example_294303236.jpeg https://www.bluleadz.com/blog/youre-using-alt-text-wrong-and-how-to-fix-that
https://www.bluleadz.com/hs-fs/hubfs/Blog_pics/alt-text-example-2.png?width=550&name=alt-text-example-2.png https://www.bluleadz.com/blog/youre-using-alt-text-wrong-and-how-to-fix-that