The last two years have seen a rapid rise in the amount of time that both adults and children spend on screens, driven by the recent COVID-19 pandemic. A key adverse effect is digital eye strain (DES). Recent trends in human-computer interaction and user experience have proposed voice- and gesture-guided designs that present more effective and less intrusive automated solutions. These approaches inspired the design of a solution that uses facial expression recognition (FER) techniques to detect DES and autonomously adapt the application to enhance the user's experience. This study sourced and adapted popular open FER datasets for DES studies, trained convolutional neural network models for DES expression recognition, and designed a self-adaptive solution as a proof of concept. Initial experimental results yielded a model with an accuracy of 77% and resulted in the adaptation of the user application based on the FER classification results. We also provide the developed application, model source code, and adapted dataset for further improvements in the area. Future work should focus on detecting posture, ergonomics, and distance from the screen.

The increased use of the internet for research requires users to navigate web pages designed with different fonts, font sizes, font colors, and background colors. These websites do not always meet the basic requirements for visual ergonomics. The online eLearning trend requires teachers to post materials on learning management systems (LMS) without paying much attention to the content's appearance. Content developers use their discretion to choose fonts and backgrounds. This freedom often leads to the publishing of content that is difficult to read, even for users with no visual disabilities. As a user navigates through online content, the differences in content presentation introduce temporary visual challenges. Users strain their eyes as they try to adjust to different display settings. Similarly, the user's environment can temporarily influence their ability to read the information on a screen, for example, where the lighting in a room is poor or where the screen is too close or too far.

A popular approach for ensuring that technology addresses user disabilities is assistive technology, which calls for specialized products that aim at partly compensating for the loss of autonomy experienced by disabled users. Here the user is required to acquire new technology or adapt existing technology using available tools before using it. Where user requirements are not known a priori or change dynamically, this approach is ineffective because it forces redeployment or reconstruction of the system. Additionally, disabilities vary widely in severity, and mild or undiagnosed disabilities often go unsupported. Further, persons with mild disabilities tend to shun assistive technology because it underlines the disability, is associated with dependence, and degrades the user's image, thus impairing social acceptance. The net result is that many users have become accustomed to squinting or straining their eyes to change the focus of items on the screen. Some users will move closer to or further from the screen depending on whether they are myopic or hyperopic. In such cases, the burden of adapting to the technology resides with the user's behavior. This approach can present further health challenges to the user, such as damaging their posture.

This research proposes a technique that shifts the burden of adjusting computer settings from the user to the computer, with minimal user involvement. The use of intelligent techniques that can learn and predict the most suitable adaptations autonomously presents a potential solution. Integrating assistive technology into mainstream technology will make it more acceptable, and this calls for new approaches in technology design.
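The adaptation step of such a self-adaptive loop — take the FER model's per-frame strain estimate and adjust the display accordingly — can be sketched as follows. This is a minimal illustration, not the study's actual implementation: the threshold values, the settings adjusted (font size, contrast), and the hysteresis logic are all assumptions made for the example.

```python
# Illustrative sketch of the adaptation step in a FER-driven self-adaptive UI.
# The thresholds and the specific settings adapted are assumptions for this
# example; the study's actual application may adapt different parameters.

from dataclasses import dataclass

STRAIN_THRESHOLD = 0.6   # assumed confidence needed to trigger an adaptation
RELAX_THRESHOLD = 0.2    # assumed confidence below which adaptations relax
MAX_FONT_PT = 24
MIN_FONT_PT = 10


@dataclass
class DisplaySettings:
    font_pt: int = 12
    high_contrast: bool = False


def adapt(settings: DisplaySettings, p_strain: float) -> DisplaySettings:
    """Adjust display settings given the FER model's strain probability."""
    if p_strain >= STRAIN_THRESHOLD:
        # User appears strained: enlarge text first, then raise contrast.
        if settings.font_pt < MAX_FONT_PT:
            settings.font_pt += 2
        else:
            settings.high_contrast = True
    elif p_strain < RELAX_THRESHOLD and settings.font_pt > MIN_FONT_PT:
        # User looks comfortable: gradually roll back earlier adaptations.
        settings.font_pt -= 1
    return settings
```

In a full application, each webcam frame would be classified by the trained CNN and the resulting probability fed into `adapt`, so the interface converges on settings the user is comfortable with instead of requiring manual adjustment.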