Mobile service for authoring of accessible content
 
SONAAR - Social Networks Accessible Authoring

Deliverable 7.1 (Mobile service for authoring of accessible content)

Document Technical Details

Document Number: D7.1
Document title: Mobile service for authoring of accessible content
Version: 1.0
Document status: Final Version
Work package/task: WP1/Task 1.2
Delivery type: Report
Due date of deliverable: July 31, 2021
Actual date of submission: February 1, 2021
Confidentiality: Public

Document History

Version Date Status Author Description
0.1 22/01/2021 Draft Letícia Seixas Pereira First draft
0.2 27/01/2021 Draft André Santos Installation instructions
0.3 27/01/2021 Draft Carlos Duarte Functionalities
0.4 27/01/2021 Draft Carlos Duarte Next steps
0.5 27/01/2021 Draft Letícia Seixas Pereira Introduction
0.6 29/01/2021 Draft João Guerreiro Review
0.7 01/02/2021 Draft André Rodrigues Review
0.8 01/02/2021 Draft José Coelho Review
1.0 01/02/2021 Final Carlos Duarte Final version

Introduction

SONAAR aims to facilitate the user generation of accessible content on social network services by developing a solution that supports the authoring and consumption of media content on social platforms on both desktop and mobile devices. In addition to improving the accessibility of this content, the proposed solution also has the potential to raise awareness of the importance of authoring accessible content by engaging users in accessible authoring practices.

This deliverable concerns the first work package of the SONAAR project, which aims to facilitate authoring content in compliance with the Web Accessibility Directive (WAD). The findings on accessibility barriers and motivational factors for publishing accessible media content on social networks, reported in Deliverables D1 and D4, informed the design choices and requirements needed to redesign the current interaction flow to better support accessible practices.

The workflow proposed by SONAAR addresses two different scenarios currently found on major platforms. For services that provide machine-generated alternative descriptions, SONAAR intends to assess the quality of these text alternatives and prompt users to improve them. For social network services that do not provide them, SONAAR suggests keywords to include in the text alternative, thus easing the authoring process.

This report details the technical aspects of one of the two prototypes proposed to support accessible content authoring. As specified in Task 1.2, this prototype targets a major mobile operating system, Android, and one of the two social network services we intend to target. In particular, in this prototype we focus on Twitter because it does not provide machine-generated alternative text and, as discussed in D1 and D4, is widely adopted by blind users. Together with the SONAAR backend service described in Deliverable D5, this prototype supports end users in the creation of accessible media content for social media platforms.

This document is structured as follows: Section 2 describes the functionalities deployed in this prototype. Section 3 details the setup instructions to follow in order to run the current version of the SONAAR mobile service. Section 4 presents the next steps foreseen to evolve the deployed prototype to better meet users' needs as well as SONAAR's overall goals.

Functionalities description

The initial version of the SONAAR mobile service demonstrates the core features that will be supported by this prototype. These features are:

Detect authoring of social network content including images

The SONAAR mobile service is capable of detecting the Twitter application screen where a user authors a tweet with an image. The detection is based on the screen’s elements. If Twitter alters the elements on this screen, the detection algorithm needs to be updated.
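The screen detection described above can be sketched as a simple check over the view IDs visible in the current accessibility tree. This is a minimal illustration only: the resource IDs below are hypothetical placeholders, not Twitter's actual resource names, which may change between app releases (which is exactly why the detection must be kept up to date).

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;

public class ComposerDetector {

    // Hypothetical view IDs standing in for the elements of Twitter's
    // image-composer screen; the real app's resource names may differ.
    private static final List<String> REQUIRED_IDS = Arrays.asList(
            "com.twitter.android:id/tweet_text",
            "com.twitter.android:id/attached_image",
            "com.twitter.android:id/tweet_button");

    /**
     * Returns true only when every element that characterizes the
     * image-composer screen is present in the accessibility tree.
     */
    public static boolean isComposerScreen(Set<String> visibleViewIds) {
        return visibleViewIds.containsAll(REQUIRED_IDS);
    }
}
```

In the running service, the set of visible view IDs would be collected from the accessibility node tree on each window-change event before calling this check.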

Suggest alternative texts for the image being shared on the social network

During the tweet authoring process, the SONAAR mobile service is capable of suggesting existing alternative texts for the same image, if that image is already present in the SONAAR backend. The service requests the alternative texts from the SONAAR backend, based on the image the user uploaded to Twitter. To access the image being tweeted, a screenshot is captured once all the required elements are present on the image upload screen. This screenshot is then sent to the SONAAR backend to retrieve alternative text suggestions.

Store the alternative text for the image being shared on the social network

The SONAAR mobile service listens for activation of the “Tweet” button in the Twitter application. When an activation is detected, the service checks for the presence of an alternative text. If an alternative text is detected, and its content differs from every suggestion the SONAAR service has provided, the image and corresponding alternative text are sent to the SONAAR backend for storage.
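The decision of when to send an authored description to the backend can be captured in a small policy function. This is a sketch under the assumptions stated in the text: the description must be non-empty and must differ from all of SONAAR's own suggestions, so the service does not simply echo its library back to the backend.

```java
import java.util.List;

public class AltTextUploadPolicy {

    /**
     * Decide whether an authored alternative text should be stored:
     * it must be non-empty, and it must differ from every suggestion
     * SONAAR itself provided for this image.
     */
    public static boolean shouldStore(String altText, List<String> suggestions) {
        if (altText == null || altText.trim().isEmpty()) {
            return false;  // nothing authored, nothing to store
        }
        for (String suggestion : suggestions) {
            if (altText.trim().equals(suggestion.trim())) {
                return false;  // user accepted one of our own suggestions
            }
        }
        return true;  // genuinely new description: worth storing
    }
}
```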

Setup instructions

The SONAAR mobile service was developed and tested on a Google Pixel 2 running Android 11. To work correctly, the service must run on a device with at least Android 9 and with the language set to English.

How to install:

  1. Download the apk file from https://github.com/SONAARProject/mobile-client;
  2. Enable “install from unknown sources” for the app you start the installation from;
  3. Open the app, accept the storage permission and enable the accessibility service.

Next steps

With the core features provided by this version of the SONAAR mobile service in place, the next versions will focus on the following aspects:

  • Extend the range of features to a second social network: Facebook.
  • The current features work without requiring user intervention (beyond installing the mobile service). Even though this is a possibility that needs to be considered, we learned from the user studies reported in D1 and D4 that we need to increase awareness of the problems associated with authoring inaccessible content on social platforms. We plan to study the best ways to integrate these features more prominently into the social networks’ interfaces. The changes to the UI will aim to study how motivational aspects can be explored to increase the amount of accessible content authored, as well as ways to overcome existing barriers through, for instance, interactive tutorials.
  • Currently the backend service suggests alternative texts when they are already present in the alternative text library, i.e. when another user has previously authored content with the “same” image (see description in D5). The service we are currently using to find “equivalent” images also provides a listing of the key subjects and concepts present in the image. By building upon such information, we plan to extend our alternative text suggestions to situations where we have no previous alternative text stored.
  • The same mechanism introduced in the previous point can be explored towards another goal. Having available, on the one hand, the key subjects of an image and, on the other hand, an alternative text written by the content author, we can think about inferring the quality of the proposed alternative text through measures of semantic similarity across different media. Based on this similarity metric we can provide users with an indication of the quality of the alternative text they have written.
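One simple stand-in for such a cross-media similarity measure is the Jaccard overlap between the concept labels returned by the image analysis service and the words of the authored description. This is only an illustrative sketch, not the metric SONAAR will necessarily adopt; the concept labels and threshold are assumptions for the example.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Locale;
import java.util.Set;

public class AltTextQuality {

    /**
     * Jaccard overlap, in [0, 1], between the concept labels detected in an
     * image and the words of the authored alternative text. Higher values
     * suggest the description covers more of the image's key subjects.
     */
    public static double score(Set<String> imageConcepts, String altText) {
        Set<String> words = new HashSet<>(Arrays.asList(
                altText.toLowerCase(Locale.ROOT).split("\\W+")));
        Set<String> concepts = new HashSet<>();
        for (String concept : imageConcepts) {
            concepts.add(concept.toLowerCase(Locale.ROOT));
        }
        Set<String> union = new HashSet<>(concepts);
        union.addAll(words);
        concepts.retainAll(words);  // concepts is now the intersection
        return union.isEmpty() ? 0.0 : (double) concepts.size() / union.size();
    }
}
```

A production metric would likely use semantic embeddings rather than exact word overlap, so that "puppy" in a description still matches a "dog" concept, but the word-overlap version conveys the idea.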

These steps will also apply to, and benefit, the SONAAR browser extension reported in D6.1.