SONAAR - Social Networks Accessible Authoring

Final Project Report

Deliverable D10 (Final Project Report)

Period covered by the report: from February 2020 to July 2021

Document Technical Details

Document Number: D10
Document title: Final Project Report
Version: 2.0
Document status: Final version
Work package/task: WP4/Task 4.1
Delivery type: Report
Due date of deliverable: July 30, 2021
Actual date of submission: August 31, 2021
Confidentiality: Public

Document History

Version | Date | Status | Author | Description
0.1 | 28/08/2021 | Draft | Carlos Duarte | First draft
0.2 | 29/08/2021 | Draft | Letícia Pereira | Reviewed
0.3 | 30/08/2021 | Draft | André Rodrigues | Reviewed
0.4 | 30/08/2021 | Draft | João Guerreiro | Reviewed
0.5 | 31/08/2021 | Draft | Tiago Guerreiro | Reviewed
1.0 | 31/08/2021 | Final | Carlos Duarte | Final version
1.1 | 28/10/2021 | Draft | Letícia Seixas Pereira | Lessons Learned
1.2 | 28/10/2021 | Draft | Letícia Seixas Pereira | Responding to unsupported languages
1.3 | 29/10/2021 | Draft | Tiago Guerreiro | Reviewed
1.4 | 01/11/2021 | Draft | Carlos Duarte | Reviewed
2.0 | 02/11/2021 | Final | Letícia Seixas Pereira | Final version

Executive Summary

This document is the final report of the SONAAR project. SONAAR is a pilot project that aims to demonstrate the feasibility of developing a solution that motivates and supports users of social network services in adopting accessible authoring practices when publishing images.

All SONAAR activities have been conducted according to the work plan, even though the schedule of some of the activities has been impacted by the ongoing COVID-19 pandemic. All deliverables have been submitted to the European Commission, and all milestones have been achieved. The recommendations from the interim review meeting have been addressed and the project’s objectives have been fulfilled.

SONAAR conducted two phases of user trials during the project. At the start of the project, a user study allowed us to identify the needs of social network users and the barriers that prevent them from authoring their publications in an accessible manner. This guided us in establishing the requirements for a solution to support users in authoring content accessibly. We prototyped and deployed two instances of this solution. One prototype targets the Android operating system and allows users of the Twitter and Facebook applications to receive notifications about the need to include a description when they are publishing content with images; furthermore, suggestions of possible descriptions are presented to users to make this task less demanding. The second prototype provides the same features for the desktop Google Chrome web browser, as a browser extension.

This study, together with further findings throughout the project, allowed us to identify different scenarios in which the authoring of content on social networks can be improved to foster its accessibility. Our prototypes implemented some variations of these scenarios. Other workflows, not implemented in the prototypes, have been documented in multiple SONAAR deliverables, thus further contributing to future developments in this domain.

To support the two prototypes, SONAAR has developed a backend where images and their descriptions are stored. This backend allows searching for an image in order to provide descriptions for it from multiple sources. One description is generated automatically, taking into account the concepts that describe the image. Another results from an automated OCR process that identifies text present in the image. The final source is previous descriptions authored by other SONAAR users, potentially of the highest quality of all. The image searching and automated description generation mechanisms are based on an external commercial service.

Given that the number of descriptions available for the same image should increase as SONAAR is adopted by more users, we have included a quality measure to help sort descriptions of the same image and present only the top-quality descriptions to SONAAR users.

With one of SONAAR’s goals achieved by supporting users in authoring accessible content, we updated the prototypes to pursue a second goal. By allowing users to request a description of images on any web page, or of any image in an Android application, we demonstrated that it is possible to deploy accessible content elsewhere on the web and in mobile applications.

The second phase of user studies was conducted during the final months of the project. It aimed at validating the effectiveness of the piloted solutions developed in SONAAR. These activities were significantly impacted by the COVID-19 pandemic: due to its restrictions, we had to forgo conducting trials in controlled environments. We redesigned the evaluation so that it could be completed remotely. However, we were unable to recruit the desired number of participants. Even though we received positive feedback on both the experience provided by the prototypes and the effectiveness of the authored documentation, we cannot generalize our findings due to the limited number of participants.

Concerning dissemination, SONAAR was presented through multiple channels. Besides the project website and social media account, SONAAR was presented in one episode of the Mosen at Large podcast, in the Accessible Europe forum, and in multiple end-user online forums and mailing lists. Addressing the research community, SONAAR was promoted in several conferences, including a demonstration at the Web4All 2021 conference where it won the Delegates Award, and a paper presentation at the WebSci’21 AI and Inclusion (AAI) Workshop. An overview of the project was published in the ACM SIGACCESS Accessibility and Computing Newsletter and a publication is under review by the Behaviour & Information Technology journal.

Through the project’s activities we identified reasons for the lack of accessible content in social network publications. The outcomes of our pilots demonstrate that it is possible to have a positive impact on the accessibility of content authored by end-users. Furthermore, the analysis of logs of how SONAAR is being used shows that the impact is felt elsewhere, with most requests to the SONAAR backend coming from other web pages and applications, and not just from the authoring of accessible social media publications.

Even though SONAAR is a pilot project, we are investigating ways to keep the SONAAR services running in the future. Without funding, it is paramount that the commercial service used in SONAAR be replaced by an alternative that does not require payment. Through the dissemination activities we got in touch with the Media Verification team at ITI/CERTH, which has developed services that could replace the commercial services we currently use in SONAAR. At this time we are assessing their effectiveness. With a successful replacement of these services, we expect to be able to keep SONAAR running and support, at least, the scenario where requests for image descriptions are made from any web page or Android application. The authoring scenario requires further maintenance effort, given the frequent changes to the social networks’ interfaces. The preferred solution to this problem would be to have the social networks themselves create the request for descriptions and send it to SONAAR. We are currently discussing this with Twitter, to understand how we can collaborate with them to improve the accessibility of the content on their social network.

Introduction

Context and overall objectives of the project

User-generated content plays a key role in social networking, allowing more active participation, socialisation, and collaboration among users. The widespread usage of mobile phones contributes to the growth of publications containing visual content. However, the majority of this type of online content remains inaccessible to part of the population, despite available tools to mitigate this source of exclusion.

SONAAR aims to facilitate the user-generation of accessible content on social network services by prototyping a solution that supports the authoring and consumption of accessible media content on social platforms on both desktop and mobile devices. In addition to improving the accessibility of this content, the proposed solution also has the potential to raise awareness of the importance of authoring accessible content by engaging users in accessible authoring practices. To achieve its major goal, SONAAR aims to:

  • Facilitate authoring of WAD-compliant content by enhancing current features that make it possible to author accessible content.
  • Deploy user-generated accessible content on mobile and web platforms by exploring a mechanism that distributes an image’s text alternative to all users of SONAAR.
  • Ensure an accessible content authoring process through an iterative design process that produces an accessible interaction flow.
  • Engage users in the production of accessible content by disseminating accessible authoring practices and educating users on how to properly create accessible content.

Project progress

The SONAAR activities demonstrated the technical feasibility of a solution that supports users of social network services in publishing media content in an accessible manner. A prototype solution was deployed that is capable of supporting users of two social network platforms (Twitter and Facebook) on two operational platforms (the Android operating system and the desktop Chrome web browser). Data collected on one platform is available to support users of the other, therefore demonstrating the feasibility of a cross-platform solution. Additionally, SONAAR extended its usefulness by allowing its users to request image descriptions on any web page or for any image in a mobile application, on the supported operational platforms. Finally, as part of the project, we created documentation on the benefits of authoring accessible content and on the best practices for writing image descriptions.

The project progress was impacted by the safety measures put in place due to the COVID-19 pandemic. This impact was most significant in the user research activities. It was first felt in the initial user study, which delayed the project’s activities at its start. The main impact at this stage was on the project’s schedule, but this was mitigated with an extra effort in the technical developments. The impact was felt again in the planned final study. We had to adapt our plan, which included tests in a controlled environment, to a remote-only evaluation. Still, we faced serious difficulties in recruiting participants for the final evaluation. This impacted the schedule of the project, with evaluation activities being extended until the project’s deadline, but also its outcomes, given that we could not recruit the desired number of participants and, therefore, our findings cannot be generalized as anticipated.

WP1: Facilitate authoring of WAD-compliant content

Work package 1 aimed at creating an interaction flow for accessible content authoring. To meet this goal, an initial study with social network users was conducted. Through online surveys and interviews, we identified challenges in accessible content authoring and barriers preventing users from providing accessible media content on major social network services. This allowed us to define a number of requirements to guide further SONAAR activities. Our findings and considerations were reported in Deliverable D1. From there, successive iterations of the authoring interaction flow were designed and prototyped, as documented in Deliverables D6.1, D6.2, D6.3, D7.1, D7.2 and D7.3.

The final versions of the prototype allow users to receive suggestions of alternative texts for the images they upload. Two versions were developed, one targeting the Chrome desktop browser and the other the Android operating system. Each version supports two social networks: Twitter and Facebook. The prototype identifies when a user is publishing an image and prompts the user (through an on-page overlay on Chrome, and through a notification on Android) to enter a description for the image. To assist the user in the description writing task, the prototype offers description suggestions which the user can copy to the description input field in the page or application. The prototype also encourages the user to personalise the description to their own context, in order to improve it. Different types of suggestions can be provided by the prototype, and these are labelled so that the user is aware of each suggestion’s origin. The types comprise descriptions authored by other users for the same image (if there are any), a suggestion created from the automatic identification of concepts relevant to the image, and the text in the image as recognised by an automatic OCR mechanism (if there is text in the image).
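To make the labelling concrete, the sketch below shows one possible shape for a suggestion as the prototypes could represent it. All type and label names are illustrative assumptions, not the actual SONAAR code.

```typescript
// Minimal sketch (hypothetical names): a description suggestion carries
// its origin so the UI can label it for the user.
type SuggestionSource = "user-authored" | "concepts" | "ocr";

interface DescriptionSuggestion {
  text: string;             // the suggested alternative text
  source: SuggestionSource; // origin of the suggestion
  quality?: number;         // ranking score for user-authored descriptions
}

// Label shown next to each suggestion in the overlay or notification.
const SOURCE_LABELS: Record<SuggestionSource, string> = {
  "user-authored": "Written by another user",
  concepts: "Generated from concepts identified in the image",
  ocr: "Text recognised in the image",
};

function labelFor(suggestion: DescriptionSuggestion): string {
  return SOURCE_LABELS[suggestion.source];
}
```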

We conducted a final study with SONAAR users. As reported in D2, we had difficulty in recruiting the desired number of users to participate in the study, which limits the generalizability of our findings. Nevertheless, we found indications that SONAAR contributes to the participants’ perceptions of an improvement in their authoring experience.

Key results achieved

  • Identified existing limitations in the social media content authoring process
  • Developed and deployed prototypes for both the Chrome desktop browser and the Android operating system that provide recommendations of text alternatives for images (authored by other users or automatically generated) during the authoring process on both Twitter and Facebook.

WP1 progress

Relative to the activities and aims described in the DoW concerning WP1, we report on the progress achieved during the project.

Facilitating authoring of accessible content

Develop native services for smartphone devices that support end-users in the creation of accessible media content for social media applications

A service for Android devices was developed that supports users by prompting them to provide descriptions of images when it detects that an image is being published on Twitter or Facebook. From what we learned in the initial user study, the task of creating a description is perceived as too cumbersome by users of social networks, who are more motivated to browse their timelines than to write image descriptions for what they publish. To lower the effort required to complete this task, the service also supports its users by offering suggestions of descriptions for the image being published.

Develop a browser extension that can detect when an end-user is posting content on a social media website

The set of features available in the Android service is also available through a browser extension for the Google Chrome browser. The level of support for authoring content is similar: the extension is able to detect when a user is authoring content with images, prompt the user to enter a description for the image, and offer a list of suggestions of descriptions for that image.
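As an illustration of how such detection can work in a browser extension, the sketch below uses a MutationObserver to watch for an image attached to a post composer. The CSS selector is a hypothetical placeholder, since the real interfaces change frequently; this is not the actual extension code.

```typescript
// Illustrative sketch only: detecting that a user is attaching an image
// to a post composer. The selector is a placeholder; the actual extension
// must track the elements used by each social network interface.
const COMPOSER_IMAGE_SELECTOR = '[data-testid="composer"] img'; // placeholder

const observer = new MutationObserver(() => {
  const image = document.querySelector<HTMLImageElement>(COMPOSER_IMAGE_SELECTOR);
  if (image && !image.dataset.sonaarPrompted) {
    image.dataset.sonaarPrompted = "true"; // avoid prompting twice
    showDescriptionPrompt(image);          // overlay with suggestions
  }
});
observer.observe(document.body, { childList: true, subtree: true });

function showDescriptionPrompt(image: HTMLImageElement): void {
  // Here the extension would render the overlay pointing at the
  // "Add description" field and fetch suggestions from the backend.
  console.log("Image detected in composer:", image.src);
}
```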

Develop a service that enables end-users to report that a particular interface has a media authoring interface

The exploration of the Twitter and Facebook interfaces with different users allowed us to understand that the interface presented to one user might have specificities that are not present for other users. These differences invalidated the initial idea of trying to identify a unique template for applications with media-authoring interfaces. This gave way to a follow-up idea: a feature that would allow end-users to report that the extension or service is no longer working. The user would then be asked to activate the interface elements that SONAAR needs to recognize. By identifying those elements (through an observer in the browser extension and the tap coordinates in the mobile service), SONAAR would then be able to update its recognition process with the relevant user actions. SONAAR would be required to keep track of the different elements that might trigger the relevant actions, given the different interfaces presented to different users.

While we still believe that such a solution could mitigate the identified issue, it was not implemented in the scope of the project. Two major reasons contributed to this decision. The first was our failure to recruit the desired number of users for the trials of the SONAAR prototypes: given the low number of trial participants on each of the platforms supporting the SONAAR prototypes, we anticipated low usage and, therefore, a limited validation of this feature. The second was the possibility of having one of the social networks (Twitter) implement the detection feature in their interface. Having the social network interface itself handle the detection of image publishing is the most robust solution, as it is independent of interface changes. Conversations with the accessibility lead at Twitter opened up this possibility.

Trigger prompts to ask users for text alternatives when social media authoring interfaces do not do so; or support the end-user in the creation of accessible content

SONAAR successfully implemented different text alternative suggestion sources and presentation workflows. Text alternatives are available from what other users created for the same image, from a machine-learning-based analysis that reveals the most important concepts of the image being published, and from the recognition of any text in the image. A description quality appraisal algorithm was implemented that ranks the available descriptions by their quality. The quality is measured based on the semantic similarity between the concepts identified in the image and the text of the description. Different workflows for presenting the suggestions have been implemented (e.g. presenting only the top-ranked suggestion; presenting the top-n ranked suggestions; allowing the user to ask for more suggestions) but, given the limited number of participants in the prototype evaluation, we have not been able to assess the experience provided by each.

In instances where the social media platform enables the creation of text alternatives, our service will first guide end-users to enable such option, if they have not done so, through in-context tutorials, similar to the ones we have been actively developing for both sighted and blind people. Towards this goal, we also want to explore using assistive macros to perform the required step for them.

During the course of the SONAAR project, Twitter made the “Add description” feature enabled by default and always present in the image tweet interface. This welcome change has rendered this goal moot. Still, we introduced in SONAAR an experience mimicking an in-context tutorial to guide users through the process of authoring their image tweets in an accessible way. In the browser extension, when SONAAR detects that a tweet contains an image, an overlay pointing to the “Add description” feature is displayed. This is followed by other overlays with instructions to create the description, the suggestions of descriptions, and links to the SONAAR documentation. In the mobile service, the instructions are provided through a series of notifications guiding the user.

We will explore techniques such as: adding recommendations directly to alt-text fields, creating overlays with multiple autocomplete descriptions, other more invasive approaches where the text alternative is forced as a required field, reminders of missed text alternatives (e.g. a red overlay over shared images without a text alternative), or even purposefully incorrect or provocative text alternatives to trigger end-users to edit.

SONAAR is able to provide suggestions through notifications (on mobile) or overlays (in the browser extension) that can be copied to the clipboard and then pasted into the text description input box (and edited according to the user’s needs and desires). We also endowed SONAAR with the ability to enter the text directly into the text box, but such a solution does not make users aware of the need to perform this activity themselves (it is equivalent to automatically entering the alt-text for the image).

With multiple sources of text descriptions, and multiple human-authored descriptions for the same image, we needed to decide how to sort the presented descriptions. Automatically generated descriptions, and descriptions based on text recognized in the image, are always presented with a label clearly indicating their origin. As for human-authored descriptions, we developed a quality measure that we use to sort the descriptions for the same image, and we implemented workflows that either present only the top-ranked description (and allow the user to request the others) or present the top five descriptions sorted by their quality.

We opted against exploring the invasive approaches, given that those would take away control from the user, which we argue would not help promote the aims of SONAAR.

We expect that different approaches may be appropriate to different individuals and thus maybe what will be needed is an adaptation of the interaction model depending on the user’s past text alternative authoring behaviours.

This solution implies gathering user data in order to develop user models. On the one hand, properly supporting this approach requires a substantial time-frame for data collection, which is beyond the resources available to the project. On the other hand, collecting user data may violate the privacy of SONAAR users. SONAAR collects images and alternative texts without any association to the user that created either the image or the text alternative (we have an id associated with the specific installation, but it cannot identify the same user using more than one device, for instance; and when the user uninstalls and reinstalls the extension or the mobile service, a new id is issued). To create a user model we would need this association (and, therefore, it could become easy to identify persons in photographs, for example). This decision is reinforced by the concern shown by some participants in our initial study about data privacy, a highly discussed matter nowadays.

Deliverables

  • D1: Accessibility barriers to publishing content on social networks (M6)
  • D2: Final evaluation of the accessible content authoring prototypes (M18)
  • D6.1: Browser extension for authoring of accessible content - Initial version (M12)
  • D7.1: Mobile service for authoring of accessible content - Initial version (M12)

WP2: Deploy user-generated accessible content on mobile and web platforms

Work package 2 aimed at providing a mechanism to distribute an image’s alternative text to all SONAAR users. For that, a backend infrastructure to support user-generated accessible media content was deployed. This service is responsible for supporting the operation of both the SONAAR browser extension and the SONAAR mobile service developed in WP1, as previously described. First, when an alternative text is provided, SONAAR stores the pair of the image and the user-generated alternative text in a repository. Later, when an image without an alternative text is displayed to a user of a social network platform, SONAAR queries this repository and displays an existing alternative text if the image is found. To support both the storing and querying of images, we initially worked on evolving an existing in-house solution for media retrieval. However, this solution was not able to provide a level of service acceptable for real-time use on social networks (some queries took several minutes). As a result, we integrated an external solution, Clarifai, which we had already used in previous research work. Given that Clarifai is a commercial service, we have started looking for alternatives. At the conclusion of the project we are studying a set of services made available by the Media Verification team at ITI/CERTH which have the potential to replace and augment the existing level of service provided by Clarifai and which we can host together with the SONAAR backend.
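A minimal sketch of the store and query interactions with the backend follows. The base URL, endpoint paths and payload fields are illustrative assumptions, not the documented SONAAR REST API.

```typescript
// Hedged sketch of the store/query flow against the SONAAR backend.
const BACKEND = "https://sonaar.example.org/api"; // placeholder URL

// Authoring: store the (image, alternative text) pair in the repository.
async function storeDescription(imageBase64: string, altText: string, lang: string): Promise<void> {
  await fetch(`${BACKEND}/descriptions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ image: imageBase64, altText, lang }),
  });
}

// Consumption: query the repository for descriptions of an image.
async function findDescriptions(imageBase64: string, lang: string): Promise<string[]> {
  const res = await fetch(`${BACKEND}/descriptions/search`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ image: imageBase64, lang }),
  });
  if (!res.ok) return [];
  return (await res.json()).descriptions as string[];
}
```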

Key results achieved

  • Deployed a service capable of storing and retrieving alternative texts for images
  • Developed and deployed prototypes for both the Chrome desktop browser and the Android operating system that are able to provide text alternatives for images on any web page (extension) or application (mobile)

WP2 progress

Deploying the accessible content supporting backend

For authoring, front end clients will be able to request media authoring and sharing interface templates. With these templates it will be up to the clients to detect when media is being shared.

As previously mentioned, a template model implementation is currently limited by the volatility of social network interfaces. To overcome this, the solution adopted so far is based on identifying specific elements (e.g. the Tweet button) that are present in all views of the interface. This solution has limited applicability, as it relies on us updating the prototypes every time Twitter or Facebook change their interface. The optimal solution would be to have the social network interface trigger the process of requesting the recommendations. We are currently talking to the Twitter accessibility team in order to understand how results from SONAAR can be integrated into the Twitter platform.

We will explore a variety of techniques to generate recommendations. For example, previous text alternatives attributed to the same content, performing a reverse image search to look for text alternatives; computer vision to generate text description; sending additional textual data (i.e. hashtags) and use them to search for images with text alternatives that have similarities, among others.

It is important for the SONAAR prototype to offer social media users several options for alternative text generation, of progressively better quality. SONAAR is capable of suggesting text alternatives attributed to the same content by identifying the same image in the SONAAR image repository. The Clarifai service also allows us to perform a reverse image search in the SONAAR image repository: we can thus find images similar to the one the user is posting and suggest the text alternatives of those images.

SONAAR also implemented other strategies to provide text alternatives, benefiting from services provided by Clarifai: 1) computer vision to generate a text description through a list of concepts with corresponding confidence scores; and 2) text recognition to provide an alternative text for images containing text.
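The sketch below illustrates how the first of these strategies could be invoked over Clarifai’s v2 REST predict endpoint. The model identifier and the response parsing are simplified assumptions and should be checked against Clarifai’s documentation; the text recognition strategy would call a second model in the same way.

```typescript
// Simplified sketch of calling Clarifai's v2 "predict" endpoint.
// The model ID and response parsing are assumptions for illustration.
const CLARIFAI_KEY = process.env.CLARIFAI_API_KEY ?? ""; // key supplied via env

async function clarifaiPredict(modelId: string, imageBase64: string): Promise<any> {
  const res = await fetch(`https://api.clarifai.com/v2/models/${modelId}/outputs`, {
    method: "POST",
    headers: {
      Authorization: `Key ${CLARIFAI_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ inputs: [{ data: { image: { base64: imageBase64 } } }] }),
  });
  return res.json();
}

// Strategy 1: concepts with confidence scores from a general recognition model.
async function imageConcepts(imageBase64: string): Promise<Array<{ name: string; value: number }>> {
  const out = await clarifaiPredict("general-image-recognition", imageBase64); // placeholder model ID
  return out.outputs?.[0]?.data?.concepts ?? [];
}
```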

While we initially considered the possibility of using other sources of information for the generation or appraisal of text alternatives, these were eventually discarded. The use of image metadata is not feasible due to the way that images are captured in the Android service. Given that the image is captured via a screenshot of the social network screen while the user is publishing the image, SONAAR does not have access to the image itself, which means we cannot access the metadata of the image. The use of hashtags, or other information, in the social media post accompanying the image was discarded due to privacy concerns.

The use of Clarifai, a commercial service, represents a liability for the maintenance of the SONAAR service after the funding period ends. We are currently exploring an alternative to replace the Clarifai service. Through the MediaVerse project we contacted the Media Verification team (MeVer) at ITI/CERTH. At this stage we are testing the Near-Duplicate Detection service with a view to replacing the Clarifai service responsible for searching for the same image in our database. In the near future we will start testing an annotation service to replace the Clarifai service that provides the most relevant concepts for an image. The MeVer team also offers an experimental service capable of captioning images, which might represent an additional source of image descriptions. Other services provided by MeVer that could be useful for SONAAR include a model to classify whether an image is a meme and the ability to process videos in addition to images.

For consumption, front end clients will be able to search for text alternatives previously associated with the image based on the hash codes calculated. The service will provide a list of text alternatives based on the ones previously registered for that particular hash code. Moreover, it will go through the same process to generate additional text alternatives with the same techniques described for recommendations.

The SONAAR backend also makes the description suggestion service available to the prototypes in the extended context of any web page or mobile application. The service takes an image as input and provides a list (sorted by quality) of the human-authored descriptions it has stored for that image, together with the automated descriptions created from the concepts identified in the image and from the text recognized in it.

The backend service will make available a service to assess the quality of text alternatives.

We have updated the backend service to include a quality measure for image descriptions. This quality measure allows SONAAR to sort the image descriptions based on their quality and to reply to requests for image descriptions with only the top quality descriptions.

The quality measure and ordering are based on a number of factors. It begins with the list of concepts extracted from the image by Clarifai. We search for these concepts in the description of the image, and also for synonyms of the concepts; to collect synonyms we resort to an external service (DataMuse). The higher the number of concepts or synonyms found in the description, the higher its quality. Another factor is a semantic similarity score computed between the whole list of concepts and the description. To compute this score we resorted to another external service (Dandelion API), which is free at the level of service required by SONAAR. To avoid rating highly a description that is purely based on the automatically recognized concepts, we include a penalty factor for automatically generated descriptions. Finally, to sort the descriptions we give equal weight to the quality of the description and the percentage of times that description has been used in the past. In this way, new descriptions of high quality can be presented to users without being ranked low simply because they have not been used yet. At the same time, popular descriptions that might not be considered high quality by our algorithm (due to its limitations, described in D6.3 and D7.3) will also be presented to users.
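A condensed sketch of this computation follows. The combination weights and the penalty value are illustrative assumptions, not the deployed constants; the actual algorithm and its limitations are described in D6.3 and D7.3.

```typescript
// Illustrative sketch of the description quality measure.

// Synonyms of a concept via the DataMuse API.
async function synonymsOf(word: string): Promise<string[]> {
  const res = await fetch(`https://api.datamuse.com/words?rel_syn=${encodeURIComponent(word)}`);
  const words: Array<{ word: string }> = await res.json();
  return words.map((w) => w.word);
}

// Semantic similarity between two texts via the Dandelion API (token via env).
async function semanticSimilarity(text1: string, text2: string): Promise<number> {
  const url =
    `https://api.dandelion.eu/datatxt/sim/v1/?text1=${encodeURIComponent(text1)}` +
    `&text2=${encodeURIComponent(text2)}&token=${process.env.DANDELION_TOKEN}`;
  const res = await fetch(url);
  return (await res.json()).similarity ?? 0;
}

async function descriptionQuality(
  concepts: string[],
  description: string,
  autoGenerated: boolean,
): Promise<number> {
  const desc = description.toLowerCase();
  // Factor 1: how many concepts (or their synonyms) appear in the description.
  let hits = 0;
  for (const concept of concepts) {
    const candidates = [concept, ...(await synonymsOf(concept))];
    if (candidates.some((c) => desc.includes(c.toLowerCase()))) hits++;
  }
  const conceptScore = concepts.length > 0 ? hits / concepts.length : 0;
  // Factor 2: semantic similarity between the concept list and the description.
  const similarity = await semanticSimilarity(concepts.join(" "), description);
  let quality = (conceptScore + similarity) / 2;
  if (autoGenerated) quality *= 0.5; // penalty for automatically generated descriptions
  return quality;
}

// Final ordering: equal weight to quality and past usage share.
const sortScore = (quality: number, usageShare: number): number =>
  0.5 * quality + 0.5 * usageShare;
```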

Facilitating consumption of accessible content

In the two major smartphone operating systems (i.e. iOS and Android) it is currently not possible for a third-party service to access the media content on an application, even as an Accessibility Service on Android. As a consequence, we need to explore another option to provide the text alternatives to end-users.

The solution adopted by SONAAR to provide alternative texts for images in mobile applications other than those directly supported (i.e. Twitter and Facebook) relies on the sharing mechanism provided by mobile devices. A user can request from SONAAR a description for any shareable image in any Android application. To do so, the user employs the standard Android sharing mechanism: when SONAAR is installed, it becomes available as a target for sharing images. When an image is shared with the SONAAR service, the user receives a notification with the description of that image. The description results from the same backend process that provides description recommendations to users authoring image publications on social networks.

The adopted solution has the added benefit of ensuring full user control over the images that are shared with SONAAR, therefore increasing the privacy of the service.

"Whenever Mary decides to check a description for a pushed text alternative, she will be provided with the alternative with the highest quality and the possibility to check the other alternatives."

SONAAR is able to sort multiple descriptions for the same image according to their quality as computed by our quality measure algorithm. Two workflows that take advantage of this sorting were implemented: one presents the user with the top-quality result (and allows the user to request more descriptions), and the second presents the user with the top five results. While the workflows are implemented and available, due to the limited number of participants in the final user evaluation we have not been able to study their impact on the user experience provided by SONAAR.

We will explore what information may improve users’ perceptions of the quality of the text alternatives

Toward this goal, SONAAR stores how many times a particular description was selected by authoring users. Once again, due to the limited number of participants in the final user evaluation, and therefore the limited number of descriptions authored, we were not able to identify the significant factors that lead users to prefer one text alternative over others.

When a social media webpage is detected, the extension will try to find all images on the browser viewport.

In the browser extension it is relatively trivial to identify all images present in the web page: SONAAR is able to detect all images present in the DOM tree, thereby also enabling its use on web pages beyond social media. However, to avoid a possible overload of queries and resource usage, this functionality is only triggered upon user request (by activating a feature in the extension). This implementation also allows users to decide when SONAAR will be used, thus preserving their privacy.
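A minimal sketch of this on-demand behaviour is shown below, assuming a hypothetical helper that queries the SONAAR backend by image URL.

```typescript
// Sketch of the user-triggered consumption feature: collect the images in
// the DOM and fill in missing text alternatives. findDescriptionsForUrl is
// a hypothetical helper that queries the SONAAR backend.
declare function findDescriptionsForUrl(url: string): Promise<string[]>;

async function describeAllImages(): Promise<void> {
  const images = Array.from(document.querySelectorAll<HTMLImageElement>("img"));
  for (const img of images) {
    if (img.alt.trim() !== "") continue; // keep existing text alternatives
    const descriptions = await findDescriptionsForUrl(img.src);
    if (descriptions.length > 0) {
      img.alt = descriptions[0]; // top-quality description first
    }
  }
}
```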

Deliverables

  • D3: Towards a sustainable backend solution (M18)
  • D5: Backend service (M12)
  • D8: Browser extension for presentation of accessible content (M15)
  • D9: Mobile service for presentation of accessible content (M15)

WP3: Engage users in the production of accessible content

Work package 3 aimed at developing strategies to increase users’ awareness of and engagement in creating accessible content. The user study described in WP1 also served to 1) explore the motivational factors users currently show for engaging in such activities and 2) investigate which information can be used to increase users’ engagement. Our findings show that users are interested in producing accessible content, but they find no general information and guidance on the steps they should take to achieve such a result.

During the project we created and published documentation focusing on the advantages of creating accessible content and on best practices for writing image descriptions. The documentation was published on the project’s website and disseminated via the prototypes and the project’s social media account.

One of the aspects investigated in the final user study was the characteristics of the created documentation. The study revealed that the documentation was clear, complete and helpful. Participants also felt this documentation has the potential to increase the motivation of those that have access to it to adopt accessible authoring practices. Interestingly, participants suggested further ways to improve the documentation, including the inclusion of more user-focused stories on how accessible content is advantageous for people with different disabilities, which, we agree, could increase the inclusivity of SONAAR’s outcomes.

Key results achieved

  • Identified motivational factors to increase users’ engagement in accessible practices in social media
  • Published documentation on accessible authoring of media content

WP3 progress

SONAAR will prepare documentation for users of the social networks supported by the accessible authoring prototype to: 1) motivate users to author accessible content; and 2) educate users to author accessible content (e.g. by letting them know what should be in a text description).

We have prepared documentation on accessible authoring of social media content, especially focused on text alternatives for images. In our documentation we tried to address some of the limitations we identified in previously existing documentation (e.g. it is hard to find on the social media platforms, and most often very extensive, making it hard to use and understand).

The documentation (prepared in two languages: English and Portuguese) has been disseminated through three main outlets: 1) the SONAAR website; 2) the SONAAR Twitter account; and 3) mailing lists of end-user associations and other stakeholder organizations (e.g. EDF).

Updating the authoring mechanisms for both web browsers and mobile devices so that the previously prepared documentation is available to the users of the authoring services

After the publication of the SONAAR documentation, the Chrome extension and Android service prototypes were updated to include links to the documentation.

To assess the motivational quality of the documentation, we will need to conduct longitudinal studies.

We had planned to conduct user evaluations of the final versions of the prototypes that would also include the evaluation of the prepared documentation. Unfortunately, as already described, it was not possible to recruit a large enough sample of users for the planned evaluation, which also impacted the evaluation of the documentation. For this reason, we designed an additional activity specifically targeting the evaluation of the prepared documentation: participants were asked to read through the documentation and then answer a questionnaire.

The results of this user evaluation revealed that the documentation was judged to be clear, complete and helpful. The effectiveness of the documentation was also demonstrated by its ability to teach something new about accessible content authoring to more than half of the study participants, even though more than two thirds of the participants reported already writing descriptions for the images they post on social networks. Given the constraints on organizing and running the study, we were not able to assess the motivational capabilities of the documentation in a longitudinal study. Still, the perception of the majority of the participants is that readers of the SONAAR documentation are likely to have an increased motivation to adopt accessible authoring practices.

Deliverables

  • D4: Understanding motivations for creating accessible media content (M9)
  • D6.2: Browser extension for authoring of accessible content - Update (M15)
  • D7.2: Mobile service for authoring of accessible content - Update (M15)
  • D6.3: Browser extension for authoring of accessible content - Final version (M18)
  • D7.3: Mobile service for authoring of accessible content - Final version (M18)

Responding to unsupported languages

SONAAR currently provides support for the English and Portuguese languages. This support is provided at the interface level, i.e., SONAAR messages and notifications are provided in Portuguese or in English, according to the language of the user’s device or browser. If neither of these languages is identified, SONAAR defaults to English. Also by default, SONAAR provides for all images the image concepts identified by Clarifai (in English) and OCR-based recognition of any text in the image. For the third source of descriptions, i.e. those previously provided by other users for the same image, SONAAR will only present descriptions written in the language identified on the user’s device or browser.
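A small sketch of this selection logic is shown below; all names are illustrative, not the shipped code.

```typescript
// Sketch of the language handling described above (names are illustrative).
const SUPPORTED_LANGUAGES = ["en", "pt"] as const;
type Lang = (typeof SUPPORTED_LANGUAGES)[number];

// Interface language follows the browser/device language, defaulting to English.
function interfaceLanguage(): Lang {
  const lang = navigator.language.slice(0, 2).toLowerCase();
  return (SUPPORTED_LANGUAGES as readonly string[]).includes(lang) ? (lang as Lang) : "en";
}

// Concept- and OCR-based suggestions are shown regardless of language, but
// user-authored descriptions are filtered to the identified language.
function filterUserDescriptions(
  descriptions: Array<{ text: string; lang: string }>,
): Array<{ text: string; lang: string }> {
  const lang = interfaceLanguage();
  return descriptions.filter((d) => d.lang === lang);
}
```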

Accordingly, several internal tests were conducted in order to guarantee the proper functioning of this workflow. So far we have not identified any errors linked to this mechanism. Nevertheless, a notification was included in the mobile service to properly inform users that issues may arise when using a language other than those supported by SONAAR.

WP4: Project management

The objective of WP4 was to provide scientific, technical, administrative and financial management of the project. During the project lifetime, there was continued coordination among team members. To ease the coordination of project activities, the members used a shared project workspace, set up using Google Drive, to share project deliverables and other internal resources. Technical developments were stored and shared with any interested party through GitHub repositories. The project team met online every week for a two-hour period. Other communication was done through email. Specific details on the project management activities are provided in later sections of this report and in the financial report.

Key results achieved

  • Timely project progress

Deliverables

  • D10: Final Project Report (M18)

Follow-up of recommendations and comments from previous review

Recommendation 1

Provide more information about study participants.

More details about study participants were provided in the revised Deliverable 4, in particular in section 4:

  • We added information about the recruitment process for the initial interviews and more details about the interviewees.
  • We added distribution of the survey answers by language, giving a broader perspective of the reach and the context of our survey.
  • We added a table with demographic information about the participants in the final interview.

We do not have data regarding other characteristics, such as country of origin, or computer skills. Since we already had a large number of questions in the form, we prioritized gathering information about the use of social networks and accessible practices and decided against a longer demographics questionnaire.

Recommendation 2

Develop an installer for the prototypes.

Since the beginning of April 2021, the SONAAR prototypes have been available from the Google Play Store (the SONAAR mobile service) and the Chrome Web Store (the web extension). The prototypes were announced through the project’s website, Twitter account, a podcast interview and multiple mailing lists.

The average number of users with the Chrome extension installed over the period from April 7 to August 23 was 14. During July and August, the average was 23. The maximum number of users with the Chrome extension installed was 25 (reached on three separate occasions: July 22, July 24 and August 11).

The average number of users with the Android service installed over the period from April 15 to August 22 was 5. During July and August, the average was 7. The maximum number of users with the Android service installed was 9 (reached on five separate occasions: June 28, July 16, July 18, July 28 and August 11).

Recommendation 3

Follow development approaches that maximise the sustainability, extensibility and applicability of the project results.

SONAAR aims to support the Twitter and Facebook interfaces, as explained in the description of work. Using resources to manage the authoring interfaces of additional social networks would not serve to demonstrate the usefulness of the piloted solution in this Preparatory Action.

However, we agree that development approaches that generalize to multiple platforms can maximize the impact of the project. Towards this goal, we deployed a solution that allows SONAAR users to request a text alternative on whatever page or screen they are.

The final architecture promotes the extensibility of the project resources, therefore increasing the likelihood that parts or the whole of the solution will keep being used. This can be seen in the decoupling between the SONAAR prototypes and the SONAAR backend. The backend publishes a REST API that is used by both SONAAR prototypes even though they run on different platforms. The same REST API can be used by other services to accomplish aims similar to those of the SONAAR prototypes. The backend also makes use of several services accessed through different REST APIs. This makes it possible, with a limited amount of effort, to replace these services with others that return similar results, should the need arise.
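As an illustration of this decoupling, the backend’s image services could sit behind a narrow interface like the one sketched below (all names are hypothetical), so that Clarifai can be swapped for the MeVer services with limited effort.

```typescript
// Illustrative sketch of the service decoupling (names are hypothetical).
// The backend depends only on this interface, not on a concrete provider.
interface ImageAnalysisService {
  findSameImage(imageBase64: string): Promise<string | null>; // repository image id
  concepts(imageBase64: string): Promise<Array<{ name: string; value: number }>>;
  recognisedText(imageBase64: string): Promise<string | null>;
}

// Current provider: would wrap the Clarifai REST API.
class ClarifaiService implements ImageAnalysisService {
  async findSameImage(_img: string): Promise<string | null> { return null; } // stub
  async concepts(_img: string): Promise<Array<{ name: string; value: number }>> { return []; } // stub
  async recognisedText(_img: string): Promise<string | null> { return null; } // stub
}

// Candidate replacement: would wrap the MeVer Near-Duplicate Detection
// and annotation services behind the same interface.
class MeVerService implements ImageAnalysisService {
  async findSameImage(_img: string): Promise<string | null> { return null; } // stub
  async concepts(_img: string): Promise<Array<{ name: string; value: number }>> { return []; } // stub
  async recognisedText(_img: string): Promise<string | null> { return null; } // stub
}

// Swapping providers is a one-line change for the rest of the backend.
const analysis: ImageAnalysisService = new ClarifaiService();
```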

In what concerns the specific recommendations suggested:

  • “Make technical choices in the development of the prototypes that will allow for the easy integration, now and in the future, of third party solutions for generating alternative text”: The architecture of the backend was designed to allow image description services to be easily extended or replaced. We are currently taking advantage of this to study the possibility of replacing the services provided by Clarifai with services provided by the MeVer team. This would allow us to replace a commercial solution with one that does not require payment, enabling us to keep SONAAR operating after the end of the funding period. Furthermore, the MeVer team has additional services that we can explore to increase the quality of image descriptions and to extend the SONAAR services to other domains, such as providing descriptions or captions for videos.
  • “Attempt integration with well-established third-party solutions that allow access to popular social networking platforms”: As reported in D3, we contacted the organizations or individuals responsible for Bacon Reader, Twitterrific, Tweetbot, Tweetings, Plume, Easy Chirp, UberSocial, TweetCaster and Chicken Nugget. The organizations responsible for three of the applications declined our invitation to discuss the possibility of integrating the SONAAR prototyped mechanisms in their applications. The other organizations did not respond to our contact request.
  • “Tasker and Macrodroid automation applications that both offer native support for accessibility service-based automation”: Upon inspection, these applications do not seem capable of interacting with other applications, offering automation only at the system level (such as turning on the WiFi or adjusting the volume). This type of automation does not provide a solution for the issues we uncovered (for instance, images in mobile applications being presented as the background of a view without any syntactic or semantic indication that the element is an image).
  • “OCR mechanism to recognize the contents of the screen”: This solution is language dependent and also susceptible to changes in the platforms’ user interfaces. Furthermore, identifying elements through a visual inspection of the screen is time- and resource-consuming, and could degrade the user experience. On the other hand, we have used OCR in the SONAAR backend to suggest text alternatives for images containing text.
  • “Reacting to an intent of sharing a photo”: SONAAR now addresses the photo sharing intent as part of the media consumption scenario of use. A user on any mobile application screen is able to share an image with SONAAR in order to receive one or more descriptions for that image.
  • “API for exchanging multimedia data between different applications”: The Android 12 beta introduces an API that allows users to move any type of data into an application from different sources: clipboard, keyboard or drag and drop. In the SONAAR context, we could offer users the possibility to also move the image being posted in the social network application to SONAAR, in order to receive an image description. This would, however, require the user to select the image twice: once for the social media application and again for SONAAR. We decided against prototyping such a feature given the extremely low penetration of Android 12 at this stage.
  • “Google work into the process of recognizing different icons based on neural learning”: The work done with IconNet represents a promising avenue for icon recognition on Android devices. This would certainly be useful to help detect relevant screens in the social media authoring process. Unfortunately, at this stage, IconNet is deployed only in Android’s Voice Access application, where it is able to recognize 31 different icons, and it is not available for integration into applications. There is also no information available on whether IconNet will be extended to detect other types of interface elements, such as buttons in an application’s UI.

Recommendation 4

Integrate in the prototypes as many methods for the automatic generation of alternative text in the project as possible and demonstrate scalability of the project approach.

The final version of the SONAAR backend supports the following alternative text generation methods:

  • Search for alternative texts previously added by other users and stored in the SONAAR database;
  • Automatic image recognition and recommendations through key concepts present in the media, provided by Clarifai;
  • Text recognition provided by Clarifai.

All the solutions described are robust enough to be implemented even at large scale.

Furthermore, SONAAR integrates an image description quality measure to assist sorting multiple descriptions for the same image.

Recommendation 5

Further investigate engagement and motivational factors.

The user research conducted and reported in Deliverable D4 consisted of a preliminary investigation into the main reasons currently leading users not to provide alternative access to their content, and into what they considered potential reasons to engage in accessible practices. In this study, we found that people are interested in participating in the inclusion process, even though most are unaware of existing accessibility approaches; when they are aware, the extra effort required to author content in an accessible manner can become a barrier to the adoption of such approaches. This validated the need for a supporting service that would reduce the effort required to author content accessibly. SONAAR, by automatically identifying when a user is authoring media publications in social networks and by offering suggestions of descriptions for images, possesses features targeting both the lack of knowledge of most social network users and the effort required to create accessible content. Additional features included in SONAAR, such as access to documentation on the benefits of creating accessible content and the best ways to do it, were designed to increase both the motivation and engagement of users as well as the quality of the descriptions.

To investigate the effectiveness of the SONAAR features we planned another user research activity. Our plans considered different settings. One setting would consist of groups of users centered around a blind person. We would recruit blind participants who would assist us in further recruiting 4 or 5 of their close contacts, in particular those that publish media content the blind participants usually consume. This would allow us to understand if SONAAR could motivate the close contacts to create accessible content more frequently. It would also allow the blind participants to judge whether the quality of the descriptions increased in the presence of SONAAR. Another setting would consist of individual participants who would be regular SONAAR users, from which we could find out if SONAAR leads to an increase in the authoring of accessible content. In both settings we expected to be able to experiment with changes in the prototypes to judge the effectiveness of different motivational strategies.

We tried to recruit participants for this user research through multiple channels: posts on social networks (our Tweets surpassed 10 thousand impressions), mailing lists of organizations of users with disabilities, mailing lists of other EU-funded projects in the accessibility and media domain, and reaching out to participants of the initial user study. Unfortunately, we were unable to engage users in participating in the planned study, and therefore we were unable to investigate the effectiveness of the deployed motivational strategies.

Even though the participants in the initial study stated they were interested in participating in the inclusion process, we learned that when asked to change their routine (social) behavior, that interest is not sufficient to prompt a change. To participate in the user research, all participants were required to fill in two questionnaires (one at the beginning of the study, the other at the end), install one of the two SONAAR prototypes, and use their social networks as usual during a two-week period. Even such a small change to their routine (participants were not requested to post more or less often than usual) was not well received.

Even though we have not learned about the effectiveness of our motivational strategies, we learned about the motivation of users of social networks. The main finding from our experience was that expecting people to actually change something in their workflows for the benefit of others appears not to be realistic. This reinforces the importance of integrating services such as those offered by SONAAR directly into the social network interfaces, without requiring users to install an additional component.

From a motivational perspective, we consider that we invested significantly in our dissemination efforts and reached a large audience (we cannot know the numbers reached through channels that we did not control, like the podcast or the mailing lists of the collaborating organizations). We learned that, even with these efforts, the appeal to install and use a solution with SONAAR’s features was not enough. We envision two possibilities to address this problem. From the communication perspective, we argue that the message should be spread by an institution with higher visibility and, probably, be the subject of a professional marketing campaign. This would increase the visibility and the weight of the campaign, but it is out of the scope of the project. Still, given that in other domains related to online presence (e.g. security) such campaigns have had limited usefulness, a second possibility to motivate users to engage with accessible practices would be to make it a legal obligation. For example, the European Accessibility Act requires online stores to be accessible. If its scope were extended to other types of online services, service providers would be required to integrate these types of features into their offerings, resulting in increasingly accessible content being authored by everyone.

Recommendation 6

Define and clearly present in one of the final deliverables the different workflows that will be enabled by the SONAAR prototypes and the respective user interactions that will be supported depending on SONAAR’s functionality and technical limitations.

As the features supported by SONAAR were being developed, we designed and updated different workflows that explored their usage. These workflows have been documented in Deliverables D6.2, D6.3, D7.2 and D7.3.

Recommendation 7

Significantly intensify dissemination efforts, especially towards associations of people with disabilities and communities offering relevant services.

After the mid-term review we increased our dissemination efforts significantly compared to the preceding period. We contacted and published SONAAR information in the user forums suggested in the review report, as reported in Deliverable D2. We also contacted the suggested news outlets, but were not successful in publishing information about SONAAR there. On the other hand, our contact with the Mosen At Large podcast was successful, and SONAAR was presented in one of the podcast’s episodes. We also disseminated SONAAR through mailing lists and social media channels of organizations of people with disabilities, such as the European Disability Forum. SONAAR was also disseminated through the channels of the LEAD-ME COST Action, which gathers media accessibility experts across Europe.

On a parallel track, the dissemination efforts in the scientific community also increased. In April we presented a demonstration of SONAAR at the Web4All Conference, which was awarded the “Web Accessibility Challenge - Delegates Award”. In June we published “Suggesting Text Alternatives for Images in Social Media” in the SIGACCESS Newsletter and “Nipping Inaccessibility in the Bud: Opportunities and Challenges of Accessible Media Content Authoring” in the WebSci’21 AI and Inclusion (AAI) Workshop.

Recommendation 8

Develop and report on a realistic sustainability plan for the SONAAR prototypes.

As described in the answer to recommendation 3, our efforts to collaborate with third-party social networking clients were not fruitful.

We also contacted Andrew Hayward, a lead accessibility engineer at Twitter. We introduced SONAAR, which was met with a positive reaction. Twitter’s accessibility engineers were already exploring a notification system to warn users about the need to include image descriptions in their image tweets. SONAAR offers a solution that could support such a system, and from Twitter’s perspective it might be useful to explore SONAAR’s findings. Additionally, the Machine Learning group at Twitter could also be interested in collaborating and in integrating some of its solutions with SONAAR. In the coming weeks we expect to hold further discussions.

The Twitter solution represents the best option for ensuring the sustainability of SONAAR because it can address the biggest problem faced during SONAAR’s development: changes to the social network’s interface that cause the image sharing detection algorithm to stop recognizing when a user is sharing an image. If the social network itself triggered the SONAAR service, image sharing detection would no longer be necessary.

In parallel to the conversations with Twitter, we also engaged with the MeVer team, which can support us with tools capable of replacing the Clarifai services. We are currently starting to test the MeVer Near-Duplicate Detection (NDD) service as a replacement for the image similarity service provided by Clarifai. Following a successful integration with the NDD service, we will proceed to test other MeVer services, leading to a full replacement of the Clarifai offering. Removing the paid Clarifai services would allow us to keep SONAAR operating from our premises for a longer period of time.
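
One way to structure this replacement is to hide the similarity provider behind a common interface, so that Clarifai can be swapped for the MeVer NDD service without touching the rest of the backend. The sketch below is purely illustrative: the interface and the adapter stubs are hypothetical and do not reflect the real Clarifai or MeVer APIs, nor SONAAR's actual code.

```typescript
// Hypothetical abstraction over an image-similarity provider.
// Neither adapter reflects the real Clarifai or MeVer NDD APIs;
// they only illustrate how the backend could stay provider-agnostic.
interface SimilarityService {
  // Returns identifiers of stored images that are near-duplicates
  // of the supplied image bytes.
  findNearDuplicates(image: Uint8Array): Promise<string[]>;
}

class ClarifaiSimilarity implements SimilarityService {
  async findNearDuplicates(image: Uint8Array): Promise<string[]> {
    // ...call the commercial service here...
    throw new Error("Clarifai call not implemented in this sketch");
  }
}

class MeVerNddSimilarity implements SimilarityService {
  async findNearDuplicates(image: Uint8Array): Promise<string[]> {
    // ...call the MeVer Near-Duplicate Detection service here...
    throw new Error("MeVer NDD call not implemented in this sketch");
  }
}

// The rest of the backend depends only on the interface, so the
// provider can be chosen by configuration.
function makeSimilarityService(provider: "clarifai" | "mever"): SimilarityService {
  return provider === "mever" ? new MeVerNddSimilarity() : new ClarifaiSimilarity();
}
```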

Taking this into consideration, a sustainable SONAAR solution could be one of the following options:

  • The existing SONAAR backend with the commercial services replaced by non-commercial services. The backend publishes a REST API to be used by front-end services (a sketch of such a request follows this list). The two SONAAR prototypes (for the Chrome browser and the Android platform) can remain operational to answer user-initiated requests for image descriptions. The authoring detection mechanisms in the prototypes would need to be maintained. Given the open nature of SONAAR, this could be performed by the community of developers that express interest, and would be seeded by us. Other developers could create their own front-end solutions (e.g. for the iOS platform) and connect to the API provided by the backend.
  • The SONAAR solution would be integrated into the social network services themselves. This could happen at two levels: the first is the inclusion of all SONAAR components; the second would be the inclusion of only the front-end features, while the backend would remain independent. The latter would have the advantage of easily connecting multiple social networks through the same backend, but the disadvantage of requiring the backend to be migrated to a server or service with the resources to handle the significant number of requests that would need to be answered.
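
To illustrate the first option, a third-party front-end would only need to speak the backend's REST API. The following sketch shows what such a request could look like; the endpoint, payload, and response shape are hypothetical, for illustration only, and do not reproduce SONAAR's documented API.

```typescript
// Hypothetical client request to a SONAAR-style backend.
// Endpoint, payload, and response shape are illustrative only.
interface DescriptionSuggestion {
  text: string;                             // the suggested alternative text
  source: "concepts" | "ocr" | "user";      // how the description was obtained
  language: string;                         // e.g. "en" or "pt"
}

async function requestDescriptions(imageUrl: string): Promise<DescriptionSuggestion[]> {
  const response = await fetch("https://sonaar.example.org/api/descriptions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ uri: imageUrl }),
  });
  if (!response.ok) throw new Error(`Backend error: ${response.status}`);
  // The backend is assumed to return suggestions already sorted by quality.
  return response.json();
}
```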

Recommendation 9

Make sure that the deliverables themselves are accessible.

We apologize for providing initial reports that were not fully accessible. We have created a Markdown-based template that we now use to generate the reports in standard, accessible HTML. Additionally, these can be exported to PDF if needed (although PDF remains a format that is not completely accessible). All the reports have been generated with this template and have been made available in HTML and PDF formats.

Progress beyond the state of the art and potential impacts

Summary of progress towards objectives and results achieved

Obj1: Facilitate authoring of WAD-compliant content

To meet the first SONAAR objective, we created prototypes that demonstrate the technical feasibility of assisting users of social networks in authoring media publications in an accessible manner. The prototypes successfully met this objective by:

  • Being able to detect when a user of a social network is authoring content with images. This feature allows the prototypes to prompt the user to add a description to the image being published, therefore increasing the accessibility of the authored content. The SONAAR prototypes integrate a solution that looks for specific elements (identified through different characteristics, depending on the platform) to detect when the user of a social network is in the process of publishing content that includes an image (a sketch of this approach follows this list). The deployed solution demonstrates the feasibility of the designed approach. It required identifying different sets of characteristics, given that the same social network can present a slightly different user interface to different users. It also highlighted the need to react to changes to the user interfaces of the social networks, given that the recognition process is based on the characteristics of the user interface elements. This issue strongly impacted the support for one of the social networks (Facebook), whose user interface is updated much more frequently than the other supported social network’s.
  • Being able to provide suggestions of text alternatives for the images being published. This feature lessens the effort of writing a description by giving users a suggestion they can improve, instead of starting from a blank slate, therefore increasing the likelihood that a description will be added to the image being published. The SONAAR prototypes send the image being published to the SONAAR backend, which replies with a list of potential descriptions for that image. The backend uses multiple sources to prepare its answer: descriptions are created from the concepts recognized in the image by a machine learning solution, from text recognized in the image by an OCR solution, and from descriptions authored by other social network users for the same image. The SONAAR backend includes a description quality assessment algorithm that sorts the descriptions for the same image by quality and presents only the highest-quality descriptions to the user.
  • Being available in more than one social network. This demonstrates it is possible to reuse, in one social network, text alternatives created in other social networks, leading to a higher availability of descriptions everywhere by tapping multiple human sources.
  • Being available on more than one platform. This demonstrates it is possible to reuse, on one platform, text alternatives created on other platforms. Together with the previous point, this shows it is possible to deploy a service with the characteristics of SONAAR in a ubiquitous manner. The major advantages are the capability to collect descriptions from the largest number of sources possible, and the possibility to offer suggestions in the largest number of social networks possible, therefore increasing the amount of accessible content in as many services as possible. The main drawback is the amount of resources needed to keep the mechanisms that detect when a user is publishing an image up to date. If several social networks frequently update their user interfaces, across multiple platforms, the effort involved in updating the SONAAR prototypes could increase significantly. For this reason, the SONAAR backend is decoupled from the SONAAR prototypes, which means the suggestion mechanism is independent of the detection mechanism. Therefore, a solution where the social network queries the SONAAR backend is already feasible, and would remove the need to update the detection mechanism by delegating that responsibility to the social network itself.
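
As an illustration of the detection approach described in the first point above, a browser extension can observe the page for a composer with an attached image. The sketch below is hypothetical: the selectors are placeholders, since the real prototypes match platform-specific element characteristics that, as noted, change over time.

```typescript
// Minimal sketch of DOM-based authoring detection in a browser extension.
// The selectors are placeholders; real social networks require
// platform-specific (and frequently updated) element characteristics.
const COMPOSER_SELECTOR = '[data-testid="composer"]';               // hypothetical
const IMAGE_PREVIEW_SELECTOR = '[data-testid="attachment-image"]';  // hypothetical

const alreadyPrompted = new WeakSet<HTMLImageElement>();

const observer = new MutationObserver(() => {
  const composer = document.querySelector(COMPOSER_SELECTOR);
  const preview = composer?.querySelector<HTMLImageElement>(IMAGE_PREVIEW_SELECTOR);
  if (preview && !alreadyPrompted.has(preview)) {
    alreadyPrompted.add(preview);
    // The user is publishing content that includes an image:
    // prompt for a description and fetch suggestions from the backend.
    promptForDescription(preview);
  }
});
observer.observe(document.body, { childList: true, subtree: true });

function promptForDescription(image: HTMLImageElement): void {
  console.log("Image attached to a post; suggesting a description for", image.src);
}
```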

Obj2: Deploy user-generated accessible content on mobile and web platforms

To meet the second SONAAR objective, we augmented our prototypes to be able to provide image descriptions in any web page or mobile application screen. The prototypes successfully met this objective by:

  • Answering user-initiated requests for image descriptions. This feature demonstrates that it is possible to use the descriptions harvested or created in the social network context to increase the accessibility of content in other contexts, including other social networks. To enable this feature we extended the range of services offered by the SONAAR backend. The service that provided descriptions for an image during the social network publishing process was extended to accept requests that provide access to the image in multiple ways. The standard service used in the authoring process receives the image as a byte stream, since at that point the image is not yet online and, therefore, does not have a URI. By extending this service to accept multiple request configurations, it became possible to extend the SONAAR prototypes with a feature that allows the user to initiate a request for image descriptions. Unlike the automatically triggered process in the social media publishing context, we decided against making this an automated process for several reasons: the user would not control which images are sent to the SONAAR backend, violating the user’s privacy; a web page with many images (e.g. a page with the results of an image search) would generate many requests to the SONAAR backend and might therefore impact the user’s mobile data usage; and the user might already be satisfied with the existing description for an image, which would make the request irrelevant. By placing the user in control, the SONAAR prototypes allow users to decide when they want to receive descriptions for the web page or mobile application screen they are consulting. Additionally, users can select whether they want to receive descriptions only for images that do not already have one, or for all images regardless of whether they already have a description.
  • Being available on web and Android platforms. This demonstrates that it is possible to deploy such a feature on two platforms with different technical requirements. On the web, the user initiates a request for descriptions for all the images on the page, or only for the images that do not already have a description. After receiving feedback that the request has been processed, the user can navigate through the images on the page (SONAAR makes them focusable, as sketched below) and listen to the description of any of them. On Android devices, it is not possible to identify images in the same way they are identifiable in the DOM of a web page. For this reason, installing the SONAAR Android prototype creates a service with which images can be shared. Whenever users wish to receive a description for an image (either because the image does not have one, or because they are not satisfied with the existing description), they only need to share the image with SONAAR using the standard Android sharing mechanism. The SONAAR prototype sends the image to the SONAAR backend and, upon receiving the answer with the descriptions, notifies the user. By opening the notification, the user accesses the descriptions from the multiple sources.
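
The following sketch illustrates the web-side behaviour just described: after the backend answers, images can be given their text alternative and made focusable so a screen reader can announce them. It is a simplified, hypothetical rendering of the approach, not the prototype's actual code.

```typescript
// Sketch: apply received descriptions to images on the page and make
// them keyboard-focusable. Simplified; not the prototype's actual code.
function applyDescription(img: HTMLImageElement, description: string): void {
  img.alt = description; // expose the text alternative to assistive technology
  img.tabIndex = 0;      // make the image reachable with the keyboard
}

// Respect the user's choice of describing all images, or only those
// that do not already have a description.
function describeImages(byUrl: Map<string, string>, onlyMissing: boolean): void {
  document.querySelectorAll<HTMLImageElement>("img").forEach((img) => {
    if (onlyMissing && img.alt.trim().length > 0) return;
    const description = byUrl.get(img.src);
    if (description) applyDescription(img, description);
  });
}
```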

Obj3: Ensure an accessible content authoring process

To meet the third SONAAR objective, we studied the main barriers preventing users from authoring accessible content on social networks, and implemented in the SONAAR prototypes an authoring workflow that addresses the major barriers. The study and prototypes successfully met this objective by:

  • Understanding the challenges the users of social networks face when uploading media content and the barriers preventing them from providing accessible content. Through our user study we identified multiple barriers:
    • Users are not aware of the steps they can take to make their content more accessible, and they find no guidance on major platforms to assist them in this process.
    • Even when users search for guidance, they report difficulty in learning about or finding accessibility features.
    • Accessibility practices are perceived by many users as an activity that requires significant additional effort on their part.
    • Some users still attach a certain stigma to accessibility, arguing that it should be employed only when necessary or, even worse, that accessibility compliance may compromise their current experience on social media.
    • Platforms not making this a requirement, or not providing a proper prompt or warning, may also be contributing to this line of thought.
    • There is still a lack of support for blind users to create accessible media content, as they find no features to assist them in this activity.
  • Deploying an authoring workflow that prompts users to include descriptions when they are publishing an image and that offers suggestions of descriptions. This workflow addresses the main barriers: the lack of awareness of accessibility practices and the perception that this activity requires significant effort. Additionally, the SONAAR prototypes include references to the SONAAR documentation on good practices for creating image descriptions, addressing the need for guidance reported by study participants.

Obj4: Engage users in the production of accessible content

To meet the fourth SONAAR objective, we created documentation on the benefits of authoring accessible content and on the best practices for writing image descriptions. We updated the SONAAR prototypes with links to the documentation on accessible content authoring. Afterwards, we disseminated the SONAAR prototypes and documentation through multiple channels. The documentation and dissemination efforts successfully met this objective by:

  • Exposing the concept of accessible authoring in the social media context to at least 10 thousand social media users. This is the number of impressions achieved by the SONAAR tweets. It does not account for impressions generated by tweets (or Facebook posts) created by other social media users, given that we do not have access to those numbers. It also does not account for the reach of the Mosen at Large podcast or of the mailing list messages, for which we likewise lack numbers. Taking this into account, the number of people exposed to the concept of accessible authoring can be expected to be higher.
  • Producing documentation capable of educating users to create social media publications with images in an accessible way. The user evaluation of the documentation created in the project revealed that more than half of the participants learned something new about accessible content creation. This is especially relevant considering that more than two thirds of the participants in the study reported already writing descriptions for images in their publications, and were therefore likely more educated about this process than most social network users.

While we managed to evaluate the effectiveness of the documentation through a dedicated user study (reported in Deliverable D2), we were not able to motivate the adoption of the SONAAR prototypes by the number of users we were aiming for. Still, our findings are useful in understanding that most social media users are not motivated enough to adopt accessible authoring practices by themselves, at least if that requires them to take the first step and install a new extension or application to support them in that task. For that reason, having the social networks themselves offer a set of features similar to the ones offered by SONAAR, or powered by SONAAR, seems the most likely way to get users to adopt accessible authoring practices. After the project’s conclusion, we are exploring this possibility with Twitter.

Potential impacts

Improve the accessibility of published media content

As identified in our initial user study, blind users are not being provided with enough information to properly interpret media content on major social network services. In this context, the SONAAR prototypes have shown that it is possible to 1) improve the accessibility of images in websites or mobile applications where the content authors did not provide accessible content and 2) benefit from text alternatives created by other end-users, improving the quality of the accessible content. As a result, SONAAR’s prototypes have the potential to improve the accessibility of published media content, benefiting millions of people if deployed on a large scale.

Educate users to the advantages of accessible content

Another finding from the initial user study concerns the unawareness of social media users of accessible practices. Despite the efforts of major platforms in providing accessibility features, end-users are still not aware of the possibility and the benefits of creating text alternatives for their visual content. By educating users on the importance of authoring accessible content and on how to do it properly, people could be willing to incorporate accessible authoring practices into their daily publishing routine, possibly extending these practices to the other social networks they use, reaching a wider scope. In SONAAR we created documentation to demonstrate the benefits of accessible practices and engaged in dissemination activities to promote the documentation and prototypes. In the user study conducted to assess the effectiveness of the documentation, more than half of the participants reported having learned something new from reading the documentation and felt it could motivate users to adopt accessible authoring practices. However, we could not demonstrate that the documentation and prototypes led to the adoption of accessible authoring practices beyond the social networks supported in SONAAR, due to low engagement in the user research activities we organized towards the end of the project. Nevertheless, our results hint that educating people has the potential to produce content with higher accessibility than simply relying on automated approaches.

Demonstrate the efficacy of user augmentation tools for accessibility purposes

Current efforts by social network services are not yet sufficient to provide blind users with contextual information that conveys the full meaning of certain media content. In this context, authors providing their own descriptions for their images have the opportunity to share relevant details and their own intention for sharing that media. While alternative descriptions provided by the users themselves are a better solution quality-wise, machine-generated descriptions may be used to facilitate the authoring process, with the added advantage that they can be deployed on a large scale. SONAAR’s hybrid approach demonstrates the efficacy of user augmentation tools for accessibility purposes.

Improve image search and classification algorithms

Complementing the previous considerations, machine-generated descriptions are highly relevant to the accessibility of media content. The SONAAR collection of images annotated with text alternatives can be used to further improve the quality of machine-generated descriptions. In a more comprehensive context, these annotations may also be used to improve image search and classification algorithms. This is an impact we expect to be able to gauge after the project. Both collaborators discussed in this deliverable (Twitter and the MeVer team) have expressed that one of the potential benefits would be improving their machine learning based image identification, classification, and description capabilities.

Lessons Learned

This exploratory project shed light on some of the current issues in social media accessibility. In the following, we discuss some of the main challenges faced during the project and some of the lessons learned from them.

Community engagement

SONAAR aimed to explore mechanisms that enable the authoring and consumption of accessible media content on social networks. The proposed solution is capable of generating a number of alternative descriptions for a given image. These come from different sources, such as OCR-based recognition of any text in the image, concepts automatically extracted from the image, and text alternatives provided by previous users for the same image. This set of descriptions is presented to users on any web page or application screen. In addition, the descriptions are presented to users when an image is uploaded on Twitter or Facebook, along with an indication of where they should provide an alternative description. In this case, these descriptions give users examples or indications of how to construct a meaningful alternative text.

As a starting point, a first user study was conducted to better understand the context in which the project stands and to gather the requirements needed to enhance the current authoring process on major platforms. On the one hand, most blind participants reinforced well-known findings from the literature, such as the fact that descriptions provided by humans are of better quality than automatic ones. This is especially true in the social media context. In general, images uploaded to social media carry a personal dimension that automatic approaches are not yet able to convey. Descriptions provided by the authors themselves generally contain more personal details, in addition to the author’s intended purpose for sharing the image. However, blind users rarely encounter an image with a suitable alternative description in their social network stream. On the other hand, most users reported wanting to do the right thing but not having enough information or guidance to better understand the accessibility context and needs.

To properly address the complexity of this context, the envisioned mechanism must be capable not only of assisting users in creating meaningful alternative descriptions during the authoring process, but also of raising awareness of the importance of providing accessible content. With that, it is possible to have a community of users authoring descriptions, enabling SONAAR to provide examples and references to other users wanting to improve the accessibility of their content.

While the pieces of this puzzle all seem to fit together, that is, people wanting to become more engaged in accessible practices and a mechanism that enables that engagement, one challenge remains: identifying how much effort and time people are willing to spend on accessibility in general.

The activities planned to validate the proposed solutions were: laboratory tests with users in order to assess the technical quality, longitudinal studies in the wild to assess the user experience aspects, and finally, diaries with regular interviews to discuss specific aspects of the interaction.

Due to the COVID-19 pandemic restrictions that limited people’s mobility and gatherings, at least in Portugal, it was not possible to conduct user testing in the laboratory as planned. In response, we designed a new user study combining all the factors to be evaluated, i.e., exploring the user experience of our prototypes but also the impact of collaborative use on the accessibility of the content shared by SONAAR users. Despite our efforts in disseminating this call for participants, we were not able to recruit any participants. Therefore, we designed a new study requiring less effort from participants. This study comprised two stages: first filling in a survey, and then using SONAAR for a period of time. We were only able to recruit nine participants, and only one third of them followed through to the end of the second phase, all of them blind users. It is worth highlighting that some of the participants who did not complete the study were from the accessibility field, working in well-known organizations and institutions. A number of factors may have contributed to this scenario:

  • Reacting to COVID-19 enforced limitations: Considering the impossibility of conducting face-to-face tests, the time spent adapting the study compromised the time needed for the longitudinal studies in the wild. To make matters worse, we had to redesign the study twice, given that our initial redesign did not attract participants (that study was based on groups of users, each centered around a blind participant who would recruit 3 to 5 members of her/his social network who regularly posted image content). In hindsight, we probably could have reacted earlier and moved directly to the safer design of the final study. We did consider that, but were (wrongly) influenced by the ease of recruiting participants for the survey and interview study we conducted in the first months of the project. We expected a similar turnout for this study, which turned out to be a wrong expectation.
  • Balance between recruiting participants and raising awareness: One aspect we aimed to assess with our study was people’s motivation to install and use SONAAR. We discussed using financial incentives to recruit participants (we used them for the study of the quality of the SONAAR documentation), but decided against them for a study that aimed to assess motivational factors, because they would have compromised its findings. When faced with this situation, future efforts should consider planning two separate studies: one focusing on technical aspects (in SONAAR, the equivalent of the laboratory-based study), which would not be impacted by financial incentives, and another on the motivational aspects. In SONAAR, this would have mitigated the lack of assessment, but would have required further resources to plan, run and analyse two studies in approximately the same time frame, which weighed on our decision to have a single study.
  • Targeting multiple communities with the same dissemination effort: SONAAR targets two distinct groups of users. On the one hand, SONAAR targets social network users in general, because those are the users that need to create media content in an accessible manner. On the other hand, SONAAR targets people with visual impairments, who would benefit from SONAAR’s ability to add descriptions to images. Our dissemination and recruitment efforts were mainly directed at the second group, either directly, by targeting organisations and forums of people with impairments, or indirectly, through social media channels (both the project’s and the project members’) whose audiences are biased towards people with disabilities. Future efforts in a similar context should consider planning two dissemination strategies, adapted to the different groups. Our strategy, focused on the benefits for people with visual impairments, was more effective with that target group, with all of the participants who reached the end of the user study being blind. Having two separate dissemination strategies would, however, require further effort and resources.
  • Supporting multiple languages: SONAAR’s contributions are highly dependent on language. We anticipated part of this impact by including a user language detection mechanism in both the desktop and mobile clients and by storing the language of human-provided descriptions, so that we can present only the descriptions in the user’s language (a simplified sketch of this filtering, with a translation fallback, follows this list). However, in the initial stages of deployment, SONAAR is heavily reliant on automatically generated descriptions. Unfortunately, these are only provided in English. This presented a problem for some of our study participants, who commented on the lack of descriptions in Portuguese and on how they could not make use of automatic descriptions in English. It also proved a dissuading factor for other would-be participants. We could have taken one of two approaches: focus all the work and dissemination efforts on the English language, or include a translation mechanism in the SONAAR platform. The former would not be an inclusive solution, and from the start of the project we aimed to support both English and Portuguese. In fact, for our first study we prepared four versions of the survey and interviews: English, Portuguese, French and Spanish. Unfortunately, we did not have the resources to support four different versions of the SONAAR clients. Future efforts where support for multiple languages is paramount should consider translation mechanisms from the start.
  • Increasing available content: SONAAR’s success relies on the number of image descriptions it can serve. We were hoping to recruit participants for the final study at a rate similar to the initial study’s, which would have increased the number of available descriptions. We also hoped to have groups of users from the same network of friends and family, which would have increased the likelihood of images shared in that network being described, therefore benefiting the visually impaired users in the study. The aforementioned translation mechanism would also contribute to increasing the number of descriptions by reusing descriptions in one language for users of other languages. However, our expectations were not fulfilled. To create a more positive user experience in the context of the final user study, one option to increase available descriptions, given the small number of participants, would have been to devote some effort to monitoring the participants’ social feeds and writing descriptions for images in them.
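
To make the language dependence concrete, the kind of filtering performed by the clients could look like the sketch below. This is a hypothetical simplification: the `Translate` hook stands in for the translation mechanism that SONAAR did not ship, which is exactly the gap discussed above.

```typescript
interface Description {
  text: string;
  language: string;                     // language stored with each description
  source: "user" | "concepts" | "ocr";
}

// Hypothetical translation hook; SONAAR did not include one.
type Translate = (text: string, target: string) => Promise<string>;

async function descriptionsForUser(
  all: Description[],
  userLang: string,      // e.g. navigator.language.slice(0, 2) on the web client
  translate?: Translate,
): Promise<string[]> {
  // Prefer descriptions already in the user's language.
  const matching = all.filter((d) => d.language === userLang);
  if (matching.length > 0) return matching.map((d) => d.text);
  // Automatic descriptions are English-only; without a translation
  // fallback, users of other languages get nothing useful.
  if (!translate) return [];
  return Promise.all(all.map((d) => translate(d.text, userLang)));
}
```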

All things considered, it became clear that popularizing accessibility is a great challenge. Providing the right tools is only one part of the process. It must be combined with proper regulation and, therefore, proper accountability of the main stakeholders involved, in this case the social network platforms. During the project, we observed that people still perceive accessibility as a charitable act, and therefore as optional, rather than a legal and moral obligation. Until further measures are in place, small, voluntary communities will need to keep creating workarounds to adapt current technologies to what is a broader cultural and social problem.

SONAAR aimed to demonstrate the feasibility of a solution that facilitates the authoring of accessible media content, and of mechanisms that automatically update the content presented to users with accessible text alternatives. Although the limitations of the preliminary results leave open questions about some technical aspects of the prototypes, the biggest barrier lay before that: getting people to take that one step further. We believe that future initiatives must allocate the time and effort needed to build and promote a community around them. In addition to enabling a rich understanding of the technical aspects in a context of use, such a community also enables a stronger push towards addressing the accessibility of social media content. Through these channels, it is possible to stress that such solutions are available and to pressure those accountable into putting them in place.

In parallel with the project, other approaches emerged and evolved, particularly on Twitter. Bots like Alt-Text-Reminder or GetCaptions allow users either to be reminded when they fail to attach a text description to posted media, or to reply to a tweet and receive an automatically generated textual description for that content. While these approaches are less ambitious than SONAAR, they are also less demanding and enable communities to improve the accessibility of the content shared among them. These solutions, although positive, may also influence how people perceive new solutions that demand a slightly higher effort.

Service coverage

Considering the technical aspects of the proposed solution, some further considerations must be taken into account to build a solid community. Given its collaborative nature, the diversity of this context must be considered to achieve a more meaningful reach.

To demonstrate the feasibility of the proposed solution, this project targeted two major social platforms, Twitter and Facebook. It is important to highlight that, even for just these two platforms, there are several interface variations (different themes, languages, etc.) and workflows for media posting (media upload button, file drag-and-drop, post replies, etc.). Combined with the poor access those platforms provide to external developers, any solution must currently deploy different workarounds to handle each of those instances. Furthermore, the platforms change on a regular basis, with new versions and personalization options that must be rapidly integrated to keep this mechanism functional. During the project, we observed that Facebook modifies its interface very often; Twitter much less so. Some of these modifications, while not very significant at the interface level, such as changing a button label from "Delete" to "Remove", are capable of disrupting the detection of interface elements. Reliance on this identification strategy has thus proven very fragile and carries high maintenance costs.

One possible way to work around this volatility is to deploy a workflow enabling users to identify key elements in their own interface. These elements would be stored along with the information provided by other users, building a dataset of interface models (see the sketch below). This proposal was described in more detail in previous SONAAR deliverables. Another possible approach is to explore an independent solution that automatically identifies key elements of an interface through existing screen recognition techniques. Both approaches could also be useful to expand the current capabilities to other social media platforms.
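
The first approach could be supported by a data structure along the following lines: a community-contributed dataset of interface models, with the detection code trying several candidate selectors instead of relying on a single hard-coded characteristic. All names and fields are hypothetical, sketched only to illustrate the idea.

```typescript
// Hypothetical interface-model record contributed by users.
interface InterfaceModel {
  platform: "facebook" | "twitter";
  variant: string;             // theme, language, A/B version, ...
  composerSelectors: string[]; // candidate selectors, most recent first
}

// Try each known candidate until one matches, instead of relying on a
// single hard-coded characteristic that a label change can break.
function findComposer(models: InterfaceModel[], platform: string): Element | null {
  for (const model of models.filter((m) => m.platform === platform)) {
    for (const selector of model.composerSelectors) {
      const element = document.querySelector(selector);
      if (element) return element;
    }
  }
  return null;
}
```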

Even though it is possible to envision potential answers to this issue, the ideal solution would be for platforms to provide better native support for accessibility features. While researchers and users spend time and effort implementing workarounds to improve the accessibility of these platforms, other areas of research are deprived of that attention. For instance, another important challenge identified during this project concerns language diversity. Current automatic description generation mechanisms only provide results in English. These platforms have a high level of penetration worldwide, and many of their users do not have English as their native language. Therefore, providing an automatic description when the original author omits one is not a viable solution for these users. Current research on description quality could also focus on assessing the performance of automatically translated descriptions, for instance.

Engaging with third parties

As previously mentioned, accessibility is still seen as an optional feature, from a user perspective but also by the major platforms. For example, on all of the major social networks, providing alternative descriptions for images is still an option instead of a requirement, and some of them make that option hard to find. The results of SONAAR could mitigate the lack of accessibility felt on social networks. As introduced in previous deliverables, SONAAR could be deployed outside the social networks’ infrastructure, which raises the challenge of increasing awareness of SONAAR among general users, or it could be part of specific social networks or third-party social network applications. With the latter, raising awareness of SONAAR’s features would be easier, because it would be directly integrated into the social network’s interface without requiring the user to install any further components. For this to be possible, third parties have to be involved in some way.

From our experience, reaching out to third parties is not an easy task, as most of our attempts received no response. Still, we were able to reach one third party (Twitter) and to learn from our attempts with the others. In the following points we summarize what we learned for future initiatives:

  • Have the third parties as project partners: If this is a possibility, it is the best way to prevent this issue. By reaching out to third parties and convincing them of the value of the project’s ideas, there is a direct route to integrating the results in the target products. This is probably easier (and more attractive to third parties) for Innovation Actions than for Pilot Projects, but it is still worth trying. For SONAAR, however, this might have implied being locked into a specific social network, which would have limited the exploration we aimed at (sharing descriptions across social networks). Still, this option requires getting in touch with the third party, even if at an earlier stage.
  • Reach out to specific people: In our experience, we were more successful when addressing a specific individual than when using a generic contact at the third party. Even without knowing someone at the third party, it is useful to first try to identify specific individuals to reach out to (this can be done with some research on the social networks themselves, like LinkedIn). If the third party has a team that already addresses accessibility issues, target someone from that team to increase the likelihood of your message being noticed. This is what we did when contacting Twitter, targeting a member of the Twitter accessibility team whom we identified through the Twitter platform.
  • Tailor your message to the company/individual: We believe it is important not only to present a viable solution, with solid results and considerations, ready for further exploitation, but also to present potential side benefits and opportunities. For instance, the mechanism developed in this project assists users in the authoring and consumption of accessible media content. Our preliminary results and user feedback may be valuable as a starting point for platforms to draw up further strategies and adapt the proposed solution to their context. In addition, the data gathered in our backend enables further research on improving existing automatically generated descriptions, on user preferences, and on image recognition techniques. Identifying which results and opportunities are best suited to the target audience, and focusing the message on those, should increase the likelihood of the message being acted upon.

Deliverables

ID WP Title Planned Date Actual Date
D1 WP1 Accessibility barriers to publishing content on social networks 31 Jul 2020 8 Dec 2020
D2 WP1 Final evaluation of the accessible content authoring prototypes 31 Jul 2021 31 Aug 2021
D3 WP2 Towards a sustainable backend solution 31 Jul 2021 30 Jul 2021
D4 WP3 Understanding motivations for creating accessible media content 31 Oct 2020 8 Dec 2020
D5 WP2 Backend service 31 Jan 2021 1 Feb 2021
D6 Browser extension for authoring of accessible content
D6.1 WP1 Initial version 31 Jan 2021 1 Feb 2021
D6.2 WP3 Update 30 Apr 2021 30 Apr 2021
D6.3 WP3 Final version 31 Jul 2021 30 Jul 2021
D7 Mobile service for authoring of accessible content
D7.1 WP1 Initial version 31 Jan 2021 1 Feb 2021
D7.2 WP3 Update 30 Apr 2021 30 Apr 2021
D7.3 WP3 Final version 31 Jul 2021 30 Jul 2021
D8 WP2 Browser extension for presentation of accessible content 30 Apr 2021 30 Apr 2021
D9 WP2 Mobile service for presentation of accessible content 30 Apr 2021 30 Apr 2021
D10 WP4 Final project report 31 Jul 2021 31 Aug 2021

Milestones

ID Title Planned Date Status
MS1 Deploy the accessible content authoring supporting backend 31 Jan 2021 Achieved
MS2 Prototype for a new interaction flow for accessible content authoring 30 Apr 2021 Achieved
MS3 Prototype a new mechanism to deliver accessible content on the web and mobile 30 Apr 2021 Achieved
MS4 Create and disseminate documentation on the advantages and best practices of accessible content authoring 31 Jul 2021 Achieved

Critical risks

Foreseen risks

In the following we analyse how the foreseen risks identified in the SONAAR project proposal impacted the project.

Updates to authoring interfaces

The SONAAR prototypes access interface elements of social media platforms, both in the web browser and on mobile devices, in order to provide the services for authoring accessible content. However, these interfaces are frequently updated by some social network service providers, and each update requires a corresponding update of the SONAAR authoring process algorithms. During the project we needed to respond to updates to the Facebook interface more frequently than to the Twitter interface. This risk had an impact on the consumption of the project’s resources, but it was manageable. Moving forward, we will try to keep up with the updates, but we cannot guarantee the same level of readiness as during the project. We expect this to impact Facebook users more than Twitter users.

Detecting authoring interfaces

While accessing interface elements on web pages does not represent a challenge for SONAAR’s development, identifying screens or elements on mobile devices is technically less trivial. The developed prototype is able to recognize the mobile screen representation in order to properly initiate the authoring procedures required by the SONAAR service. Therefore, this risk had no impact on the project.

Proprietary libraries

The SONAAR prototype on mobile devices is supported by the accessibility services provided by Android. If these services were no longer supported, there would be a risk that the access permissions for third-party services would also disappear. Until the end of the project the services never ceased to be supported, and we have no indication that this will be a problem in the future.

Browser extension limitations

The SONAAR prototype for the Chrome web browser is a browser extension that accesses the DOM tree in order to detect changes and events. If the permissions granted to extensions are limited in the future, there is a risk that access to the DOM tree will also be limited. However, at present there is no indication that this access will be restricted by future browser updates, so we do not expect this to be a problem.

Unforeseen risks

An unforeseen risk has been identified that has impacted SONAAR activities.

COVID-19 pandemic

The COVID-19 pandemic has impacted SONAAR activities almost since the start of the project. At the project management level this impact was felt moderately: for instance, the team never met face to face during the whole project lifetime. This was mitigated by adopting remote collaboration tools and work practices. The pandemic had a more pronounced impact on other SONAAR activities, especially user research. We had planned to conduct observation tasks to complement questionnaires and interviews. The limitations preventing people from getting together forced us to adjust our methodology and delayed the start and end of the user research activities, leading to the first two project deliverables being submitted after the planned date.

More importantly, it significantly contributed to the difficulties in conducting further SONAAR activities, including validating the new interaction flow and assessing the effectiveness of the documentation for accessible content authoring. In the SONAAR proposal, these activities were to be conducted through laboratory tests using techniques such as think-aloud protocols and user observation. Due to the COVID-19 pandemic, user validation was only conducted remotely. While the adjustments to the methodology mitigated the impact on data quality, this risk significantly affected our ability to recruit participants for these validation activities, as reported in Deliverable D2.

Dissemination and exploitation of project results

Scientific publications

  • Letícia Seixas Pereira, José Coelho, André Rodrigues, João Guerreiro, Tiago Guerreiro, Carlos Duarte, Barriers and Opportunities to Accessible Social Media Content Authoring, Behaviour & Information Technology, Taylor & Francis (IF: 1.781) - under review
  • Carlos Duarte, Letícia Seixas Pereira, André Santos, João Vicente, André Rodrigues, João Guerreiro, José Coelho, and Tiago Guerreiro. 2021. Nipping Inaccessibility in the Bud: Opportunities and Challenges of Accessible Media Content Authoring. In 13th ACM Web Science Conference 2021 (WebSci '21). Association for Computing Machinery, New York, NY, USA, 3–9. DOI: https://doi.org/10.1145/3462741.3466644
  • Letícia Seixas Pereira, João Guerreiro, André Rodrigues, André Santos, João Vicente, José Coelho, Tiago Guerreiro, and Carlos Duarte. 2021. Suggesting text alternatives for images in social media. In ACM SIGACCESS Accessibility and Computing 130 : 1-6. DOI: https://doi.org/10.1145/3477315.3477318

Dissemination and communication activities

Research community

  • Web4All 2020: 17th International Web for All Conference: Automation for Accessibility (April 2020) - approximately 50 participants
  • ASSETS 2020: The 22nd International ACM SIGACCESS Conference on Computers and Accessibility (October 2020) - approximately 400 participants
  • COST Action LEAD-ME Winter Training School 2020 Media Accessibility: Communication for all (November 2020) - approximately 40 participants
  • Web4All 2021: 18th International Web for All Conference: Accessibility and Crisis (April 2021) - approximately 60 participants
    • Including a demonstration of SONAAR that won the Web Accessibility Challenge Delegates Award
  • WebSci’21 AI and Inclusion (AAI) Workshop: Overcoming accessibility gaps on the Social Web - approximately 20 participants

End-user community

Dissemination to the end-user community was initially based on creating an online presence and resources to facilitate reaching social media users and representatives of the social network services. A Twitter profile was created for the project, distributing content related to accessible authoring practices. The project website was also created to post announcements and project results. These online resources have been used to inform end-users about mechanisms to author accessible content. Later, a guide on accessible content authoring practices, together with tutorials and usage scenarios, was made available on the SONAAR website and publicized through the Twitter account.

After the publication of the prototypes on the Chrome Web Store and the Google Play Store, further dissemination activities were undertaken:

  • The project goals and the prototypes were presented at the Accessible Europe 2021 event.
  • The prototypes were publicized by several end-user organizations, such as EDF (Belgium) or Fundação Raquel e Martin Sain (Portugal).
  • Information about the prototypes was also posted on several end-user forums, such as AudioGames or the Blind and Visually Impaired Community at Reddit.
  • The SONAAR prototypes were presented in one episode of the Mosen at Large podcast.

Intellectual property rights resulting from the project

This project does not have any Registered Intellectual Property Rights and does not plan to register any. All results are made available open source and free to use.

Innovation

Activities developed within the project Number Description
Prototypes 3
  • Accessible content authoring supporting backend
  • Browser extension for accessible content authoring
  • Mobile service for accessible content authoring
Trials 2
  • Online survey with social network users
  • Interviews with social networks users
Testing activities 2
  • User tests to validate the new interaction flow for accessible content authoring
  • User tests to validate the effectiveness of documentation for accessible content authoring

Will the project lead to launching one of the following into the market?

The SONAAR prototypes for the Android operating system and the desktop Chrome web browser are available open-source and free to use. The API provided by the SONAAR backend is also free to use.

  • New product (goods or services): 3
  • New process: 0
  • New method: 0

How many private companies in your project have introduced or are planning to introduce innovations?

  • Companies introducing innovation(s) new to the market: 0
  • How many of these are SMEs? 0
  • Companies introducing innovation(s) new to the company: 0
  • How many of these are SMEs? 0

Gender

Beneficiaries Number of female researchers Number of male researchers Number of females in the workforce other than researchers Number of males in the workforce other than researchers
FCUL 1 8 1 0

Usage of resources

Over the project’s duration, FCUL spent a total effort of 46.6 PM, above the planned total of 40 PM. The deviation between the planned and actual usage of human resources is explained by two major factors. The first is the impact of the COVID-19 pandemic: the restrictions put in place in response to the pandemic required us to adapt all the planned evaluation activities involving end-users. Not only did we change the activities, but, as a result of the modifications and the challenges in recruiting users, we also needed to spend more effort on recruitment. This mostly impacted WP1 and WP3. The second factor was a bigger investment in dissemination than anticipated. This came as a consequence of the interim review recommendations, but also in response to the challenges we faced in recruiting users for the evaluation activities. This change had an impact on all the technical work packages.

In what concerns the effort spent per work package (Table 11.1), most effort went into WP1 (running the two user studies, including the changes to the methodology, and preparing the initial versions of the browser extension and mobile service prototypes). WP2 and WP3 had similar amounts of effort, as planned. Most effort in WP2 went into the development of the backend (including the support for multiple sources of descriptions and the implementation of the quality measure), while in WP3 most effort went into the development and validation of the documentation.

Table 11.1: Planned and actual effort per work package

WP Planned Effort Actual Effort
WP1 16 PM 19.5 PM
WP2 10 PM 12 PM
WP3 12 PM 13.1 PM
WP4 2 PM 2 PM
Total 40 PM 46.6 PM

In what concerns the evolution of the effort spent, we averaged 2.5 PM per month during the first 12 months of the project (a total of 29.8 PM reported for the interim review), while over the last 6 months this average increased to 2.8 PM per month (the remaining 16.8 PM), reflecting the increased effort put in by the SONAAR project team.