Bibliography
With the worldwide development of new technologies, governments increasingly use them to communicate with citizens and businesses. Electronic government (e-government) is defined as the use of information technologies such as electronic networks, the Internet and mobile phones by organizations and state institutions to enable broad communication between citizens, businesses and different state institutions. The development of e-government begins with building a website to share information with users, and this website is considered the main infrastructure for further development. Website assessment is regarded as a way of improving service quality. Various international studies have introduced indexes for website assessment, but each covers only some dimensions of a website. In this paper, based on a careful review of previous studies, the most important indexes for website quality assessment are "web design", "navigation", "services", "maintenance and support", "citizen participation", "information quality", "privacy and security", "responsiveness" and "usability". Considering these indexes when designing a website facilitates user interaction with e-government websites.
Users search for multimedia content with different underlying motivations or intentions. The study of user search intentions is an emerging topic in information retrieval, since understanding why a user is searching for content is crucial for satisfying the user's need. In this paper, we aimed at automatically recognizing a user's intent for image search in the early stage of a search session. We designed seven different search scenarios under the intent conditions of finding items, re-finding items and entertainment. We collected facial expressions, physiological responses, eye gaze and implicit user interactions from 51 participants who performed seven different search tasks on a custom-built image retrieval platform. We analyzed the users' spontaneous and explicit reactions under the different intent conditions. Finally, we trained machine learning models to predict users' search intentions from the visual content of the visited images, the user interactions and the spontaneous responses. After fusing the visual and user interaction features, our system achieved an F-1 score of 0.722 for classifying the three classes in a user-independent cross-validation. We found that eye gaze and implicit user interactions, including mouse movements and keystrokes, are the most informative features. Given that the most promising results are obtained from modalities that can be captured unobtrusively and online, the results demonstrate the feasibility of deploying such methods to improve multimedia retrieval platforms.
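To make the abstract's evaluation protocol concrete, the following is a minimal sketch (not the authors' code) of early feature-level fusion combined with user-independent cross-validation, implemented as leave-one-user-out grouping in scikit-learn. The feature dimensions, the random forest classifier and the synthetic data are all illustrative assumptions; the paper's actual features and model may differ.

    # Sketch of user-independent evaluation with early feature fusion.
    # All data here is synthetic; dimensions and classifier are assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score
    from sklearn.model_selection import LeaveOneGroupOut

    rng = np.random.default_rng(0)
    n_users, n_tasks = 51, 7                 # 51 participants x 7 search tasks
    n_sessions = n_users * n_tasks
    visual_feats = rng.normal(size=(n_sessions, 64))       # e.g. image-content features
    interaction_feats = rng.normal(size=(n_sessions, 16))  # gaze, mouse, keystroke stats
    X = np.hstack([visual_feats, interaction_feats])       # early (feature-level) fusion
    y = rng.integers(0, 3, size=n_sessions)  # find / re-find / entertainment labels
    groups = np.repeat(np.arange(n_users), n_tasks)        # which user produced each session

    # Leave-one-user-out: each fold tests on a participant unseen during
    # training, so the resulting score is user-independent.
    scores = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X[train_idx], y[train_idx])
        scores.append(f1_score(y[test_idx], clf.predict(X[test_idx]), average="macro"))
    print(f"mean macro-F1 over held-out users: {np.mean(scores):.3f}")

Grouping folds by participant, rather than shuffling sessions randomly, is what prevents a model from exploiting person-specific behavioral signatures and is the standard way to obtain the user-independent scores the abstract reports.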