Browser extension for authoring of accessible content

Deliverable D6.2 (Update the authoring mechanisms and inclusion of authoring documentation)

Document Technical Details

Document Number: D6.2
Document title: Browser extension for authoring of accessible content
Version: 1.0
Document status: Final version
Work package/task: WP3/Task 3.2
Delivery type: Software prototype
Due date of deliverable: April 30, 2021
Actual date of submission: April 30, 2021
Confidentiality: Public

Document History

Version  Date        Status  Author            Description
0.1      24/04/2021  Draft   Carlos Duarte     First draft
0.2      28/04/2021  Draft   Letícia Pereira   Final draft
0.3      28/04/2021  Draft   Carlos Duarte     Review
0.4      29/04/2021  Draft   André Rodrigues   Review
0.5      29/04/2021  Draft   Carlos Duarte     Review
1.0      30/04/2021  Final   Carlos Duarte     Final version

Introduction

SONAAR aims to facilitate the user-generation of accessible content on social network services by developing a solution that supports the authoring and consumption of media content on social platforms in both desktop and mobile devices. In addition to improving the accessibility of this content, the proposed solution has also the potential to raise awareness to the importance of authoring accessible content by engaging users in accessible authoring practices.

This deliverable concerns work packages 1 and 3 of the SONAAR project. In WP1, the work focused on extending the features reported in D6.1: in particular, supporting Facebook in addition to Twitter, adding further sources of image descriptions, and starting to explore the description quality assessment algorithm. In WP3, the work focused on equipping the prototype with documentation that introduces users to the needs and advantages of accessible content and supports them in authoring content in an accessible manner.

This document is structured as follows: the next section describes the functionalities deployed in the current version of the prototype, together with the updates to the backend service that were required to support them, and introduces a set of workflows illustrating potential usage scenarios of the SONAAR prototype, either currently supported or that might be supported in the future. The subsequent sections describe the documentation included in the current version of the prototype and explain how the SONAAR Google Chrome web extension can be installed. The final section presents the next steps for the browser extension in the final period of the SONAAR project.

Functionalities description

In this section we describe the features deployed in the Google Chrome web extension to support SONAAR users in authoring accessible content. Our development was guided by the findings of the user study conducted and reported in Deliverables 1 and 4, tackling two of the main reasons why social network users do not author accessible media content: the lack of knowledge about accessibility and the extra effort it requires. By suggesting a textual description for an image in a post or a tweet, the SONAAR prototype raises awareness of the need for accessible authoring practices and makes it easier for the content author to include a description.

The prototype is capable of automatically detecting when the social network user is authoring content with images on Twitter or Facebook. The current version of the Chrome extension achieves this by inspecting the DOM of the web page and looking for the presence of elements with specific class attributes. After detecting that a user has uploaded an image on the authoring page, a request is sent to the backend containing the image and the language of the user's browser. This automated authoring detection process has one main limitation: it depends on Twitter and Facebook not changing their user interface.
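
As an illustration, a content script could watch the page for the image attachment element and forward the image to the backend roughly as sketched below. The CSS selector and the backend URL are hypothetical placeholders, not the values used by the actual extension.

    // Hedged sketch of the detection step in a content script; the selector
    // and the endpoint URL are hypothetical placeholders.
    const ATTACHMENT_SELECTOR = "div.attachment-preview img";

    const observer = new MutationObserver(async () => {
      const img = document.querySelector<HTMLImageElement>(ATTACHMENT_SELECTOR);
      if (!img || img.dataset.sonaarSeen) return;
      img.dataset.sonaarSeen = "true"; // avoid processing the same image twice

      // Fetch the uploaded image and send it, together with the browser
      // language, to the backend that suggests descriptions.
      const blob = await (await fetch(img.src)).blob();
      const form = new FormData();
      form.append("image", blob);
      form.append("language", navigator.language);

      const response = await fetch("https://sonaar-backend.example.org/suggest", {
        method: "POST",
        body: form,
      });
      const suggestions: string[] = await response.json();
      // ...present the suggestions in an overlay next to the alt text field
    });

    observer.observe(document.body, { childList: true, subtree: true });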

When the backend answers with the suggested description for the image, the prototype makes the user aware of it. This description is presented as an overlay window next to the field where the description is to be entered, as shown in the following image.

[Image: Twitter's media editing page with an overlay window showing a message indicating the suggested descriptions and a button to copy them to the clipboard]

The user can copy the description to the clipboard and paste it into the corresponding field in the authoring interface, which is also indicated by another overlay window. If the backend sends more than one description, we offer the user the chance to see the extra descriptions. If the user selects that option, the full list of descriptions is presented and any of them can be chosen. Further information on how this list is built is discussed in the next section. Different interfaces to present this content are further discussed in the Workflows section.
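
As an illustration, the overlay and its copy action could be built with standard DOM and Clipboard APIs, roughly as sketched below; the element structure and identifiers are illustrative only.

    // Hedged sketch of the suggestion overlay; the markup and ids are illustrative.
    function showSuggestionOverlay(suggestion: string, anchor: HTMLElement): void {
      const overlay = document.createElement("div");
      overlay.id = "sonaar-suggestion-overlay";
      overlay.setAttribute("role", "dialog");

      const message = document.createElement("p");
      message.textContent = `Suggested description: ${suggestion}`;

      const copyButton = document.createElement("button");
      copyButton.textContent = "Copy description to clipboard";
      copyButton.addEventListener("click", () => {
        // Standard Clipboard API; runs in response to the user's click.
        void navigator.clipboard.writeText(suggestion);
      });

      overlay.append(message, copyButton);
      anchor.insertAdjacentElement("afterend", overlay);
    }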

Finally, the prototype is able to automatically detect when the tweet or post is completed (i.e. the user activates the corresponding button on the interface). At that moment, the image's description is captured and sent to the backend, where it is stored as a new description (if the user created a new one or changed anything in one of the suggested descriptions) or, if it is not new, the number of times the description has been used is incremented.

Updates to the backend service

In this section we describe the current structure of the backend service including the updates made in order to cope with the new resources in the Chrome extension.

The backend contains a database that stores image descriptions previously provided by SONAAR users. This database links an image identifier to the currently known descriptions for that image and, now, to the language each description is written in. In order to provide these descriptions upon a client's request, the backend needs to be able to search for an image in the database.

Image searching is achieved through the image recognition service provided by Clarifai. We store images in Clarifai and use its image search feature to look up the image for which a description has been requested. Clarifai provides a similarity measure between the searched image and every image in the database. When an image has a similarity measure above the "same image threshold" we consider it to be the same image we have in the database. We have fine-tuned the "same image threshold" so that the same image is identified even if it has undergone small modifications, such as a small crop or the addition of a watermark or signature. The process to define the value for this threshold consisted of modifying several images in different ways (e.g. different amounts of cropping, different degrees of rotation, or inserting differently sized and colored watermarks) and observing the changes in the similarity measure returned by Clarifai when the modified image was compared with the unmodified one. From these observations we empirically determined the value associated with changes that we classified as resulting in an image that should be considered different.
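
The thresholding logic can be sketched as follows; searchSimilarImages stands in for the call to Clarifai's visual search, and the threshold value shown is a placeholder rather than the tuned value.

    // Hedged sketch: searchSimilarImages is a hypothetical wrapper around
    // Clarifai's visual search; the threshold below is a placeholder value.
    interface SearchHit {
      imageId: string;
      similarity: number; // similarity measure returned by Clarifai
    }

    declare function searchSimilarImages(image: Uint8Array): Promise<SearchHit[]>;

    const SAME_IMAGE_THRESHOLD = 0.95; // placeholder, not the empirically tuned value

    async function findExistingImage(image: Uint8Array): Promise<string | null> {
      const hits = await searchSimilarImages(image);
      // Treat the best hit as "the same image" only if it clears the threshold.
      const best = hits.sort((a, b) => b.similarity - a.similarity)[0];
      return best && best.similarity >= SAME_IMAGE_THRESHOLD ? best.imageId : null;
    }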

To assist in the preparation of image descriptions we expanded the use of features provided by Clarifai. The first one is the ability to provide a list of concepts related to the image that is searched. Clarifai provides us with a list of concepts, with, for each concept, a level of confidence in the accuracy of the result. We keep the concepts from this list that are above a "concept confidence threshold". These concepts offer us another way to create an image description. This threshold was also empirically defined through an analysis of the concepts generated by Clarifai for multiple images. The threshold captures what we considered the most relevant concepts without limiting the number of concepts returned. A second feature we use from Clarifai is the ability to recognize text present in the image. This is particularly relevant for the social network domain, where many posted images contain text (e.g. memes). The image's text is, very often, another source for creating an image description.
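
The concept filtering can be sketched in the same way; the record shape and the confidence threshold are illustrative, not the values actually used in SONAAR.

    // Hedged sketch: concepts as returned by Clarifai (a name plus a confidence
    // level); the confidence threshold is a placeholder value.
    interface Concept {
      name: string;
      confidence: number;
    }

    const CONCEPT_CONFIDENCE_THRESHOLD = 0.9; // placeholder, not the tuned value

    function relevantConcepts(concepts: Concept[]): string[] {
      return concepts
        .filter((concept) => concept.confidence >= CONCEPT_CONFIDENCE_THRESHOLD)
        .map((concept) => concept.name);
    }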

A final source of image descriptions is the descriptions created by the users of social networks themselves. When our front-end prototypes detect an image being posted with a description, that description is sent to the backend. The backend stores the description if it has not been stored before; if it has, we increase a counter of the number of times it has been used. The language the description is written in is also stored. We use the Franc Natural Language Detection library to identify the language of the description. If it is not possible to detect the language from the description (e.g. because it does not have enough words), we apply the same procedure to the text of the tweet or post. If this also does not return a result, we fall back to the language of the user's browser. In this way we try to accommodate those instances where a user has the browser set in one language but writes tweets or posts in multiple languages.
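
A minimal sketch of this fallback chain using the Franc library is shown below; reconciling Franc's ISO 639-3 codes with the browser's language tag is assumed to happen elsewhere in the backend.

    import { franc } from "franc"; // Franc Natural Language Detection library

    // Hedged sketch of the fallback chain: description text, then the text of
    // the tweet or post, then the browser language sent by the client.
    function detectLanguage(
      description: string,
      postText: string,
      browserLanguage: string
    ): string {
      // franc returns an ISO 639-3 code, or "und" when the text is too short
      // or too ambiguous to classify.
      const fromDescription = franc(description);
      if (fromDescription !== "und") return fromDescription;

      const fromPost = franc(postText);
      if (fromPost !== "und") return fromPost;

      return browserLanguage;
    }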

In summary, our current sources for descriptions include: descriptions provided by users, image concepts identified by Clarifai, and any text in the image. Image descriptions are characterized by a language and by the number of times they have been used in tweets or posts.

In order to answer the client's request, the backend has to decide which description or descriptions to send. To make that decision, we currently explore two features. The client's request includes the language of the user's browser; with that information we can limit our selection to descriptions in the same language. The second feature is the number of times a description has been used. The backend uses this information to order the list of descriptions, in the same language as the user's browser, that is sent back to the client.
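
A minimal sketch of this selection step, assuming a simple record shape for the stored descriptions:

    // Hedged sketch: filter stored descriptions by the language in the client's
    // request, then order them by usage count. The record shape is assumed.
    interface StoredDescription {
      text: string;
      language: string;
      timesUsed: number;
    }

    function selectDescriptions(
      stored: StoredDescription[],
      requestLanguage: string
    ): string[] {
      return stored
        .filter((description) => description.language === requestLanguage)
        .sort((a, b) => b.timesUsed - a.timesUsed)
        .map((description) => description.text);
    }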

Workflows

In this section we present a set of workflows exploring different approaches to engage users in authoring accessible media content in their social networks. In order to assess the effectiveness of the workflows that have been implemented, we prepared different versions of the Chrome extension and will distribute them to selected groups of end-users during the evaluation phase. We also present some suggested workflows that could be deployed in our prototypes in future improvements. All the workflows were created to guide the users during the authoring process, with messages indicating the next steps to be taken.

Implemented backend workflows

The first workflow established for the backend service defines the sources that will be used when the backend is queried for an image description. In this workflow, the client's request includes the language identified by the web extension. With that information we can limit our selection to descriptions in the same language. The order of this list is defined by the usage count of each of the available sources, as described in the Updates to the backend service section.

  • Answering a request - Default language:
    • Search for a previous entry for this image using Clarifai image recognition
    • When no other instance of this image is identified on the database:
      • Store the image identifier
      • Store the image concepts identified by Clarifai
      • Store any text that has been recognized by Clarifai in the image
      • Return a list composed by the concepts and recognized text
    • When an instance of this image is identified on the database:
      • Search for alternative descriptions previously provided by other SONAAR users for the same image in the same language
      • Search for the concept list provided by Clarifai for the same image
      • Retrieve any text recognized in the image by the OCR mechanism
      • Return an ordered list of descriptions, concepts and text

The following workflow defines the steps established when the backend receives an image from the Chrome web extension.

  • Receiving an entry
    • Search for a previous entry for this image using Clarifai image recognition
      • Search for the description in the list of stored descriptions of that image
      • When the description has already been used for the image
        • Increment the counter of times this description was used
      • When the description has not been used before for the image
        • Store the description provided
        • Determine and store the language the description is written in
        • Initialize the counter of times this description was used
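
A minimal sketch of this store-or-increment step, assuming hypothetical storage helpers in place of the real database layer:

    // Hedged sketch of the "receiving an entry" workflow; the storage helpers
    // are hypothetical stand-ins for the real database layer.
    interface DescriptionRecord {
      text: string;
      language: string;
      timesUsed: number;
    }

    declare function loadDescriptions(imageId: string): Promise<DescriptionRecord[]>;
    declare function saveDescription(imageId: string, record: DescriptionRecord): Promise<void>;
    declare function incrementUsage(imageId: string, text: string): Promise<void>;

    async function receiveEntry(
      imageId: string,
      description: string,
      language: string
    ): Promise<void> {
      const existing = await loadDescriptions(imageId);
      if (existing.some((record) => record.text === description)) {
        // The description has already been used for this image: bump its counter.
        await incrementUsage(imageId, description);
      } else {
        // New description: store it with its language and initialise its counter.
        await saveDescription(imageId, { text: description, language, timesUsed: 1 });
      }
    }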

Suggested backend workflows

The current structure could be extended to include other features. One possible avenue for the SONAAR backend is supporting descriptions in different languages, in addition to the ones currently used. In this scenario, the user could define which languages the descriptions should be sent in.

  • Answering a request - Multi-language support
    • Search for a previous entry for this image using Clarifai image recognition
    • When no other instance of this image is identified on the database:
      • Store the image identifier
      • Store the image concepts identified by Clarifai
      • Store any text that has been recognized by Clarifai in the image
      • Return a list composed by the concepts and recognized text
    • When an instance of this image is identified on the database:
      • Search for alternative descriptions previously provided by other SONAAR users in the languages previously defined
      • Search for the concept list provided by Clarifai for the same image
      • Retrieve any text recognized in the image by the OCR mechanism
      • Return an ordered list of descriptions, concepts and text

Another possibility would be to use a natural language processing service to translate descriptions to the user’s language.

  • Answering a request - Translated language:
    • Search for a previous entry for this image using Clarifai image recognition
    • When no other instance of this image is identified on the database:
      • Store the image identifier
      • Store the image concepts identified by Clarifai
      • Translate the concepts to the user’s language if needed
      • Store any text that has been recognized by Clarifai in the image
      • Return a list composed by the concepts and recognized text
    • When an instance of this image is identified on the database:
      • Search for alternative descriptions previously provided by other SONAAR users, independently of the language
      • If no description is found in the user’s language, translate descriptions to the user’s language
      • Translate the concept list provided by Clarifai to the user’s language if needed
      • Retrieve any text recognized in the image by the OCR mechanism
      • Return an ordered list of translated descriptions, concepts and text

Implemented workflows for supporting accessible content on social networks

The first workflow for authoring accessible content currently supported by our prototype consists of presenting users with the first description on the list sent by our backend.

  • One result
    • User selects the media upload button
    • User selects the image to be uploaded
    • SONAAR queries the backend service
    • SONAAR creates an overlay containing:
      • A message informing that a description was found for that image
      • The description suggested for that image
      • A button allowing the user to copy the description to the clipboard
      • A button allowing the user to ask for more results
      • A message informing the user where to include the description in the post or in the tweet
    • User selects the option to copy the description to the clipboard
    • User pastes the description into the indicated input box
    • User may edit the description
    • User confirms the upload of that image
    • SONAAR logs the information provided by the user

In an extension of this workflow, the user decides to ask for the complete list of descriptions, in order to complement the information provided.

  • Ask for more results
    • User selects the media upload button
    • User selects the image to be uploaded
    • SONAAR queries the backend service
    • SONAAR creates an overlay containing:
      • A message informing that a description was found for that image
      • The description suggested for that image
      • A button allowing the user to copy the description to the clipboard
      • A button allowing the user to ask for more results
      • A message informing the user where to include the description in the post or in the tweet
    • User selects the option to ask for more results
    • SONAAR opens a window containing:
      • A message asking the user to select one of the descriptions to be copied to the clipboard
      • A list of other descriptions identified for that image
    • User selects one description to copy to the clipboard
    • SONAAR closes the window
    • User pastes the description into the corresponding input box
    • User may edit the description
    • User confirms the upload of that image
    • SONAAR logs the information provided by the user

A different workflow also supported by our prototypes presents the complete list of descriptions from the start.

  • List of results
    • User selects the media upload button
    • User selects the image to be uploaded
    • SONAAR queries the backend service
    • SONAAR opens a window containing:
      • A message asking the user to select one of the descriptions to be copied to the clipboard
      • A list of descriptions identified for that image
      • A message informing the user where to include the description in the post or in the tweet
    • User selects one description to copy to the clipboard
    • SONAAR closes the window
    • User pastes the description into the corresponding input box
    • User may edit the description
    • User confirms the upload of that image
    • SONAAR logs the information provided by the user

Implemented workflow to report a problem

We also established a workflow allowing users to report a problem. In this scenario, the user can simply flag a problem with SONAAR or send a message providing more information about the issue identified.

  • Report a problem
    • User selects the option to report a problem
    • SONAAR opens a window message containing:
      • An optional input box for a description of the problem
      • A button to send the report
    • User provides information about the problem found
    • SONAAR sends a message to the support team

Suggested workflow to report a problem

In order to cope with the frequent changes in the interfaces of major social platforms and the challenges they raise, we suggest an extension to the previous workflow that allows users to contribute to the identification of elements on the interface. For this, the values identifying the required interface elements (e.g. the upload media button, the enter alt text button, the alt text input box) would be stored dynamically. In this scenario, when SONAAR detects that a specific set of values is not currently present on the interface, another attempt can be made with a different set. This could be useful not only to cope with new versions of the interface, but also with different interface themes and possible personalization settings made by the users. This suggested workflow allows users to identify the required elements on their own interfaces and send them back to SONAAR. The next time the user activates SONAAR, this information would already be available and the new set of values would be used to identify the required elements on the interface.
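
A minimal sketch of how such dynamically stored selector sets might be tried in turn; all selector values and the data shape are hypothetical.

    // Hedged sketch: try each known selector set for the required interface
    // elements and fall back to user-contributed sets. Values are hypothetical.
    interface SelectorSet {
      uploadMediaButton: string;
      enterAltTextButton: string;
      altTextInputBox: string;
    }

    const selectorSets: SelectorSet[] = [
      // default set shipped with the extension (placeholder selectors)
      { uploadMediaButton: ".upload", enterAltTextButton: ".add-alt", altTextInputBox: ".alt-input" },
      // ...sets contributed by users through the "identify required elements" workflow
    ];

    function findWorkingSelectorSet(): SelectorSet | null {
      for (const set of selectorSets) {
        const allPresent = Object.values(set).every(
          (selector) => document.querySelector(selector) !== null
        );
        if (allPresent) return set;
      }
      return null; // no known set matches the current version of the interface
    }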

  • Identify required interface elements
    • User selects the option to report a problem
    • SONAAR opens a window message containing:
      • An optional input box for a description of the problem
      • A button to send the report
      • A button to identify required elements on the interface
    • User selects the option to identify required elements on the interface
    • For each one of the required elements:
      • SONAAR shows a message asking the user to identify the element
      • User selects the corresponding element
    • SONAAR logs the information provided

Documentation included

One of the main goals of WP3 is to engage users in the production of accessible content. From our previous user study, described in Deliverable 1 and Deliverable 4, we identified that most social network users are not aware of what accessible content means and that, even when they are, they find no proper guidance to improve the accessibility of their content.

In this context, SONAAR offers support documentation to provide users with more information about accessibility practices in social networks. To prepare this documentation, we first researched existing resources on the topic. We found that most current documentation is hard to find in the official sources (the social network platforms themselves) and is also extensive, making it hard to digest. According to our studies, some social network users consider that accessible practices require additional effort and time; this format of documentation is therefore not suitable for this context.

Based on this, the SONAAR documentation to support and engage users in the production of accessible content follows two main concepts:

  1. Use of plain and simple language: We avoid the use of jargon and technical terms so that users who have no prior knowledge of accessibility and technologies can easily understand.
  2. Short and objective texts: The messages and texts provided contain only the essential information to be conveyed, allowing users to quickly go through them.

Our approach consists of two different strategies:

  1. In-context tutorials: Provide a guided authoring process for accessible media content, as previously described in the Workflows section.
  2. Website documentation: Informative guide on how people with disabilities consume media content and why to engage in accessible practices. Short excerpts of the website documentation will be periodically published through our social media accounts to increase its reach.

Website documentation

For the second strategy, the SONAAR website provides documentation with the following structure:

1) About SONAAR

This section contains a brief description of SONAAR and the links to download both the mobile application and web browser extension.

2) Why think about accessibility

This section aims to address one of the main barriers identified in our user study, the lack of knowledge on digital accessibility. For that, we briefly describe why accessibility should also be considered in digital environments and the active role that all users play in this context.

3) How people with visual disabilities are accessing your content on social networks

Another barrier identified was the lack of knowledge on how people with disabilities interpret visual content. This barrier leads users to disregard accessibility strategies, as it may be associated with the stigma that people with visual impairments do not consume visual content. In this section we introduce the concepts of screen reader and audio transcription, as well as current approaches to provide textual descriptions, highlighting the difference between automated and human descriptions. With that, we intend to raise awareness of the importance of user engagement in accessibility practices.

4) How to improve your content to provide better access for people with disabilities

This section is intended to address another barrier previously identified: most users are not aware of which steps they have to follow to ensure the accessibility of their content. To that end, this section provides users with general guidance on how to improve the accessibility of their content on social networks. First, we present some good practices, followed by practices to avoid. Finally, we present examples of accessible content, including images and what could be considered an appropriate alternative description for them.

5) How SONAAR will improve the accessibility of media content

This section briefly describes how the features deployed in SONAAR can help mainstream users provide accessible content on their social networks. We inform users not only about how they can contribute in this context, but also about the impact that the use of SONAAR may have on people with visual impairments. We expect SONAAR users to become more engaged not only in providing textual descriptions for their images, but in the general context of accessibility.

6) How to use SONAAR

This section provides users with information on how to use SONAAR both on Android devices and the Google Chrome browser. Setup instructions are detailed as well as a guide on how to use SONAAR.

7) Other useful references

This section gathers additional information that may be useful, such as the official accessibility support pages of major platforms, or additional documentation that we identified during our research. As described, the SONAAR documentation provides short and objective information; however, we encourage users to further explore this topic.

8) Contact us

We invite users to contact us with any questions, and especially to share with us their experience using SONAAR.

The complete documentation is available in Annex I. It will be referenced in the prototypes developed, as well as disseminated through SONAAR social media profiles and in different disability-related communities.

Setup instructions

The web extension was developed and tested on the Chrome browser, but it is also supported on Chromium-based browsers such as Edge, Brave, Opera or Vivaldi.

The current version of SONAAR is available for download on the Chrome web store at: https://chrome.google.com/webstore/detail/sonaar-add-alts/fclfledfnfpilnpdhflpbpnboiohbmdl

The web extension can also be manually installed:

  1. Download the code from https://github.com/SONAARProject/add-alt-extension
  2. Update the endpoints.js file to point to your own backend installation
  3. Open the extensions tab on the browser
  4. Enable developer mode
  5. Install the extension by clicking the “Load unpacked” button and selecting the folder where the code is

The extension is constantly being updated with new features developed during the project.

Next steps

One of the next steps concerns conducting a new study with social network users, guided by two main objectives: validating with users the effectiveness of the documentation for authoring accessible content, and validating the new interaction flow for accessible content authoring. The study will focus on two different groups, allowing us to investigate the individual experiences of participants and the accessibility and usability of our prototypes, but also particular aspects of each group. The first group is composed of clusters of one blind participant and at least three sighted social media contacts of that participant, in particular contacts who publish media content that the blind participant usually consumes. With this setup we will also be able to investigate the general impact that our prototypes may have on the media content consumed by blind users. The second group is composed of other interested participants, with no further criteria. With the feedback of this group we will be able to further explore the engagement and motivational factors that may be raised by SONAAR resources in a context where people may not have a personal connection with a blind person as an intrinsic motivation.

We expect that SONAAR will raise awareness and reduce the effort mainstream users need to create accessible content, therefore promoting accessible practices everywhere, not only on social networks, and lessening the burden on people with disabilities to promote these practices. Having SONAAR used on a larger scale will also allow us to explore other topics. In addition to users' feedback, the number of times an alternative description has been used can also contribute to a better understanding of users' preferences regarding image descriptions. During this period we will also be monitoring the frequency of changes to the social networks' interfaces, in order to assess their real impact on our prototypes. The information gathered will be used to re-evaluate the established workflows and to discuss future improvements to our prototypes and general contributions to social media accessibility. The results of this study will be documented in the final deliverables according to their topics.

Furthermore, we are also exploring further approaches to improve the suggestion of alternative descriptions. One of them is the possibility of identifying related images. For that we can define a "related image threshold", which will allow us to identify images that are related but not the same. This knowledge will be useful in those instances where we have not seen the image before, by allowing us to still offer the description of a related image. The other approach is fully integrating a measure of the quality of a description for an image. We currently store the number of times a description has been used. Heuristically, we can expect the most popular description to be the most adequate description for an image. However, if we apply this number without further consideration, we might disregard newer descriptions that might be better but, being new, have been used fewer times. Our quality measure applies the algorithm described in our previous work, which returns a metric of the similarity between the terms in the image description and several features of the image (including the concepts present in the image, concepts related to the image domain and any metadata in the image). The algorithm classifies the semantic similarity between the image and a description on a scale from 0 to 1. The backend will use this additional information, combined with the number of times a description has been used, to sort the list of descriptions, in the same language as the user's device or browser, that is sent back to the client.
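
A minimal sketch of how the two signals might be combined when ranking suggestions; the weighting shown is illustrative and not the final formula.

    // Hedged sketch: combine the semantic quality score (0 to 1) with a
    // normalised usage count. The weighting is illustrative only.
    interface ScoredDescription {
      text: string;
      quality: number;   // semantic similarity between description and image, 0..1
      timesUsed: number; // how many times this description has been used
    }

    function rankDescriptions(candidates: ScoredDescription[]): string[] {
      const maxUsed = Math.max(1, ...candidates.map((candidate) => candidate.timesUsed));
      return candidates
        .map((candidate) => ({
          text: candidate.text,
          score: candidate.quality + candidate.timesUsed / maxUsed,
        }))
        .sort((a, b) => b.score - a.score)
        .map((ranked) => ranked.text);
    }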

We also anticipate that, after SONAAR has been in use for some time, one or a few descriptions of an image will become more popular than the others. In a system that relies on the number of times a description has been used to sort the list of suggestions, there is a chance that a new description, which might be better than the existing ones, will not be presented to users because it sits at the bottom of the list. In addition to using the quality measure to assist in sorting the list, we will implement a mechanism to minimize this problem. When a user selects one description from a list, the selected one has its count increased and the others that were not selected have their counts decreased. Bad descriptions that never get selected will eventually have negative counts, and new descriptions, being initialised at a zero count, will be above the bad descriptions in the sorting order and therefore will have a higher likelihood of entering the suggestions list.
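
A minimal sketch of this count-update rule; the surrounding storage code is omitted.

    // Hedged sketch: the selected description gains a point, every other
    // suggestion that was presented loses one, and new descriptions start at zero.
    function updateUsageCounts(
      counts: Map<string, number>,
      presented: string[],
      selected: string
    ): void {
      for (const description of presented) {
        const current = counts.get(description) ?? 0; // new descriptions start at zero
        counts.set(description, description === selected ? current + 1 : current - 1);
      }
    }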

Finally, future efforts will also focus on the dissemination of SONAAR. We have been conducting dissemination activities throughout the ongoing project and, even though further improvements to our resources will still be made during the next months, SONAAR can now be used by a larger community. At this moment, we have stable prototypes, supporting different sources of alternative descriptions, and solid documentation on social media accessibility. Therefore, we will contact different associations of people with disabilities and communities offering relevant services, not only to participate in our user study, but also to freely use SONAAR. We will also contact publishers of current third-party social networking clients to assess the viability of collaborating towards a realistic sustainability path for the SONAAR prototypes.

Annex I: Documentation for authoring of accessible content

About SONAAR

SONAAR is a tool to support social network users in creating and consuming accessible media content. The Android application and the Chrome extension are now available in the respective stores.

In this page you will find information on accessible practices and a guide on how to use SONAAR to improve the accessibility of your media content on Twitter and Facebook.

We invite you to use SONAAR and share your experience with us. You can reach us by email at sonaar@fc.ul.pt or using Twitter at @sonaarproject.

Why think about accessibility

Accessibility means that every person should be able to have equal access to information and services regardless of their capabilities. In our daily lives it is possible to identify several accessibility strategies, such as ramps for wheelchair users or handrails for people with reduced mobility. That is no different for services provided online. Different accessibility strategies are also employed to ensure that everyone is able to fully participate in the web. One particularity of online environments is the mixed role found in some of them, especially in social networks. On these platforms we are not only consuming information but also providing it, and for that reason the responsibility of providing accessible content is shared with all users. Social networks play an essential role in connecting people, being more important than ever, and people with disabilities interact with major platforms, such as Twitter and Facebook, like everyone else. These platforms provide different resources for users to improve the accessibility of their content. Technologies can be used to complement human effort, but these environments are only fully accessible with the active participation of content authors, that is, users.

How people with visual disabilities are accessing your content on social networks

Most people with visual disabilities use screen reader software to access the web. This software converts the content of the screen into text and then synthesizes it into audio. Concerning visual content, textual descriptions provide an alternative way to interpret the content. Image descriptions are read aloud to blind or low-vision users who rely on screen readers to consume social media content. However, most visual content is not properly described, due to the mixed role previously mentioned. That means that some people are being deprived of fully participating in this aspect of modern life, such as interacting with friends and exchanging information online. This contributes to the feeling of exclusion and incapacity already perpetuated over the years.

Major social networks already provide a feature for users to enter this description. Some of them, such as Facebook or Instagram, provide an automatic description for images by default, which users can improve in order to provide more details. Twitter provides an input field when the user is uploading an image. When screen reader users encounter an image on their social networks, three scenarios are possible:

  • No description is provided and this user is not able to understand that image;
  • An automated description is provided and the user is able to understand some concepts that may be in that image;
  • A proper description is provided by the author, containing specific details about the image and the intention of that image. In this case, a visually impaired user can understand the image and even interact with that post.

Most of the time, visually impaired users are faced with the first or the second scenario described, having no proper information to really understand an image.

How to improve your content to provide better access for people with disabilities

Besides providing a textual description for your images, further steps can be taken to improve the accessibility of social networks:

Dos:

  • Provide meaningful alternative text for images: Write in simple, precise language, and keep the explanation brief. Typically no more than a few words are necessary, though rarely a short sentence or two may be appropriate. Make sure an image communicates your intended purpose.
  • Caption videos: Add a caption file, or use the post's description area to add alternative text to caption video posts.
  • Hashtags: Put hashtags after the image description, or in a comment or post description if you have many hashtags. Capitalize the first letter of each word in a hashtag.
  • GIFs: There is very little support for animated GIFs on social media platforms. Do not rely solely on animated GIFs to convey content.
  • Emojis: Screen readers will read emojis appropriately; the 👏 emoji, for example, will be read aloud as “clapping hands”.

Don’ts:

  • Avoid the use of phrases such as "image of ..." or "graphic of ..."
  • Do not overuse emojis.
  • Emoticons, or representations of expressions created through a variety of keystrokes, e.g., :), will be read character by character (for example, “colon parenthesis”) and should be used sparingly if at all.

Examples

The same image might need different descriptions depending on the context in which it is used. In a social media context, a good description provides the details the user wants to convey according to the purpose of sharing that image. We provide some examples:


Description: A very happy little caramel dog lying in the flowery grass with his tongue out.


Description: Several boats lined up in the port of Marseille with the city in the background and a wide blue sky with a few clouds.

How SONAAR will improve the accessibility of media content

SONAAR aims to provide social network users with an easier and more accessible authoring process for accessible media content. SONAAR will indicate to its users how to include image descriptions on Twitter and Facebook and provide suggestions for every image being uploaded on these platforms. With SONAAR, users will have more support to provide accessible media content and, for that reason, we expect to increase the number and improve the quality of image descriptions on social networks. With the use of SONAAR, we also expect users to become more engaged and more aware of accessible practices, including them in their daily routines. SONAAR also supports users consuming images, offering suggestions for images outside the context of social networks. The same suggestions provided for the authoring process are offered on request when the user encounters an image on a web page or in a mobile application.

How to use SONAAR

SONAAR aims to guide users into creating and consuming accessible media content.

Authoring

No further steps are needed to get descriptions for your images: as soon as the service is started, SONAAR will detect when an image is being uploaded on Twitter or Facebook and will provide suggestions of descriptions and concepts that may be useful to construct a proper alternative description. SONAAR will also indicate every step users have to take to provide this description. When a new description is provided by the user, we store it in order to improve future suggestions, but we do not collect any identifying information from our users.

Consuming

Users will also be able to ask SONAAR for descriptions when encountering an image on a web page or a mobile application. On Android devices, users can share an image with SONAAR and a list of descriptions and concepts will be provided. On the web extension, users can ask SONAAR to analyze the web page and a description will be embedded in all the images on the page. To make it easier for users to navigate to the images to check the newly added descriptions, SONAAR also makes all the images focusable by the keyboard.

Download the Google Chrome web extension

The web extension was developed and tested on the Chrome browser, but it is also supported on Chromium-based browsers such as Edge, Brave, Opera or Vivaldi. The current version of SONAAR is available for download on the Chrome Web Store.

Download the Android application

The SONAAR mobile service was developed and tested on a Google Pixel 2 running Android 11. In order to work correctly, the service must be run on an Android device running at least Android 9 and with the language set to English or Portuguese. The current version of SONAAR is available for download on the Google Play store.

Official documentation by platforms

Contact us

We invite you to use SONAAR and share your experience with us. You can reach us by email at sonaar@fc.ul.pt or on Twitter at @sonaarproject.