Journal of Mobile Multimedia
https://journals.riverpublishers.com/index.php/JMM

Mobile multimedia has become an integral part of our lives. A vast variety of mobile multimedia services, such as the mobile Internet, social media and networks, mobile commerce and transactions, mobile video conferencing, video and audio streaming, mobile gaming, interactive virtual and augmented reality, smart cities, and the Internet of Things, has already shaped expectations towards mobile devices, infrastructure, applications and services, and international standards. Open technological challenges remain, from limited battery life and limited spectrum for accommodating heterogeneous data to improvements in quality of service and user experience, context-aware adaptation to the environment, and the ever-present security and privacy issues.

As autonomous vehicles, unmanned aerial vehicles, and robots bring artificial intelligence into our daily lives, Communication/Navigation and Sensing for Services (CONASENSE), together with machine learning, big data analysis, sensor networks and information fusion, context-aware and location-aware intelligence, and multi-agent systems, will rapidly widen the technological horizon and enrich mobile multimedia, from 5G to ever-growing wireless networking and mobile computing.

The Journal of Mobile Multimedia (JMM) aims to provide a forum for the discussion and exchange of ideas and information by researchers, students, and professionals on the issues and challenges brought by emerging networking and computing technologies for mobile applications and services, and on the control and management of such networks to enable multimedia services and intelligent mobile computing applications.

River Publishers | en-US | Journal of Mobile Multimedia | ISSN 1550-4646

Feature-level Fusion vs. Score-level Fusion for Image Retrieval Based on Pre-trained Deep Neural Networks
https://journals.riverpublishers.com/index.php/JMM/article/view/24029

Today's complex multimedia content has made retrieving images similar to a user's query from a database a challenging task. The performance of a Content-Based Image Retrieval (CBIR) system depends heavily on the image representation in the form of low-level features and on the similarity measurement. Traditional visual descriptors that do not encode good prior domain knowledge can lead to poor retrieval results. Deep Convolutional Neural Networks (DCNNs), on the other hand, have recently achieved remarkable success as methods for image classification in various domains. DCNNs pre-trained on thousands of classes can extract very accurate and representative features which, in addition to classification, can also be used successfully in image retrieval systems. ResNet152, GoogLeNet, and InceptionV3 are effective and successful examples of pre-trained DCNNs recently applied to computer vision tasks such as object recognition, clustering, and classification. In this paper, two approaches for a CBIR system, namely early fusion and late fusion, are presented and compared. The early fusion concatenates the features extracted by each possible pair of DCNNs, that is ResNet152-GoogLeNet, ResNet152-InceptionV3, and GoogLeNet-InceptionV3, while the late fusion applies the CombSum method with Z-score standardization to combine the scores produced by each DCNN of the aforementioned pairs. Experiments on the popular WANG dataset show that the late fusion approach slightly outperforms the early fusion approach. The best result in terms of Average Precision (AP) over the top 20 retrieved images reaches 96.82%.

Nikolay Neshov, Krasmir Tonchev, Agata Manolova, Vladimir Poulkov, Georgi Balabanov
Copyright (c) 2024 Journal of Mobile Multimedia. Published 2024-10-01. Pages 769–784. DOI: 10.13052/jmm1550-4646.2041
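To make the score-level (late) fusion described in the abstract concrete, here is a minimal NumPy sketch: the similarity scores from two pre-trained DCNNs are Z-score standardized and then summed (CombSUM). The feature extractors, the use of cosine similarity, the database size, and the variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


def zscore(scores):
    """Standardize one model's similarity scores to zero mean and unit variance."""
    mu, sigma = scores.mean(), scores.std()
    return (scores - mu) / sigma if sigma > 0 else scores - mu


def combsum_fusion(scores_a, scores_b, top_k=20):
    """Late (score-level) fusion: Z-score standardize each model's scores, then sum.

    scores_a, scores_b: 1-D arrays with one similarity score per database image,
    produced by two different pre-trained DCNNs for the same query image.
    Returns the indices of the top_k database images under the fused score.
    """
    fused = zscore(scores_a) + zscore(scores_b)
    return np.argsort(fused)[::-1][:top_k]


# Hypothetical usage: cosine similarities between a query descriptor and 1000
# database descriptors extracted by, say, ResNet152 and GoogLeNet.
rng = np.random.default_rng(0)
scores_resnet = rng.random(1000)
scores_googlenet = rng.random(1000)
top20 = combsum_fusion(scores_resnet, scores_googlenet, top_k=20)
```

Early (feature-level) fusion would instead concatenate the two descriptors, for example np.concatenate([feat_resnet, feat_googlenet]), before computing a single similarity score per database image.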
Enhanced Authorship Verification for Textual Similarity with Siamese Deep Learning
https://journals.riverpublishers.com/index.php/JMM/article/view/24855

The internet is filled with documents written under false names or without revealing the author's identity. Identifying the authorship of such documents can help decrease the success rate of potential criminals and expose them to financial or legal consequences. Most previous research on authorship verification has focused on general text, but social media texts such as tweets are more challenging, since they are short, loosely structured, and cover a wide range of subjects. This paper proposes a new approach to determining textual similarity between these challenging messages. Inspired by the popularity of Siamese networks for determining input similarity, four deep learning models based on this architecture were developed: a long short-term memory (LSTM) network, a convolutional neural network (CNN), a combination of the two, and a BERT model. These models were evaluated on a Twitter-based dataset, and the results show that the Siamese CNN-LSTM similarity model achieved the best performance, with an accuracy of 0.97.

Rebeh Imane Aouchiche, Fatima Boumahdi, Mohamed Abdelkarim Remmide, Karim Hemina, Amina Guendouz
Copyright (c) 2024 Journal of Mobile Multimedia. Published 2024-10-01. Pages 821–844. DOI: 10.13052/jmm1550-4646.2043
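As an illustration of the Siamese architecture referred to above, the following Keras sketch encodes two tweets with a shared CNN-LSTM encoder and predicts whether they come from the same author. The vocabulary size, sequence length, layer widths, and the absolute-difference comparison are assumptions for illustration; they are not taken from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Assumed hyperparameters (placeholders, not taken from the paper).
VOCAB_SIZE, MAX_LEN, EMB_DIM = 20000, 60, 128


def make_encoder():
    """Shared tweet encoder: embedding -> 1D convolution -> LSTM."""
    inp = layers.Input(shape=(MAX_LEN,))
    x = layers.Embedding(VOCAB_SIZE, EMB_DIM)(inp)
    x = layers.Conv1D(64, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling1D(2)(x)
    x = layers.LSTM(64)(x)
    return Model(inp, x, name="shared_encoder")


shared = make_encoder()
left = layers.Input(shape=(MAX_LEN,), name="tweet_a")
right = layers.Input(shape=(MAX_LEN,), name="tweet_b")
emb_a, emb_b = shared(left), shared(right)

# Compare the two encodings via their absolute difference, then predict
# whether the pair was written by the same author (binary similarity).
diff = layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([emb_a, emb_b])
out = layers.Dense(1, activation="sigmoid")(diff)

siamese = Model([left, right], out)
siamese.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

Training would feed batches of ([tweet_a, tweet_b], same_author_label) pairs; the shared encoder forces both inputs into the same representation space, which is the defining property of a Siamese network.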
A Computer Vision-based Architecture for Remote Physical Rehabilitation
https://journals.riverpublishers.com/index.php/JMM/article/view/24443

The use of computer vision in healthcare is growing constantly, and applying these techniques to physical rehabilitation can bring great benefits. In this work, a software architecture is proposed that uses computer vision techniques to assist in the treatment and remote diagnosis of patients undergoing physical rehabilitation. The architecture was designed so that the system can be used on both computers and mobile devices. In the proposed system, users with a professional profile can register and prescribe exercises for their patients according to the treatment. Users with a patient profile can view and perform the exercises prescribed for them in the application, relying on the application's visual guidance for proper execution. Field research and a qualitative assessment were carried out to verify the usability and effectiveness of the application from the users' point of view, with a positive reception.

Daniel Muller Rezende, Paulo Victor de Magalhães Rozatto, Dauane Joice Nascimento de Almeida, Filipe de Lima Namorato, Tatiane Daniele dos Santos, Rodrigo Luis de Souza da Silva
Copyright (c) 2024 Journal of Mobile Multimedia. Published 2024-10-01. Pages 879–900. DOI: 10.13052/jmm1550-4646.2045

Design an RF Up-Down Convertor using Software Defined Radio and GNU Radio
https://journals.riverpublishers.com/index.php/JMM/article/view/24999

Several applications require an RF up/down converter that translates signals from the C/Ku bands to the L band and back. The proposed work presents the design of an RF up/down converter based on GNU Radio and Software-Defined Radios (SDRs). The converter performs frequency translation between the C/Ku and L bands, providing a cost-effective and versatile solution for RF signal processing. By using the open-source GNU Radio software, the proposed system enhances accessibility, enabling its deployment in diverse applications, from satellite communication to radar systems. The converter's distinctive features include real-time processing capabilities, customization through an intuitive graphical interface, and Python scripting. The paper presents the design considerations, signal processing techniques, and performance evaluation of the RF up/down converter, and discusses the advantages of an open-source solution over other available alternatives in terms of cost, flexibility, and rapid prototyping. Simulation and hardware results demonstrate the efficacy of the proposed work.

Aniket Thavai, Mansi Subhedar, Yash Thakur, Mrunal Patil
Copyright (c) 2024 Journal of Mobile Multimedia. Published 2024-10-01. Pages 901–916. DOI: 10.13052/jmm1550-4646.2046
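For readers unfamiliar with how frequency translation is expressed in GNU Radio, the sketch below shows a minimal Python flowgraph that shifts an assumed intermediate-frequency offset down to baseband with a frequency-translating FIR filter and decimation. The sample rate, offset, bandwidth, and test-tone source are placeholders; this is a generic down-conversion example, not the converter design from the paper.

```python
from gnuradio import gr, analog, blocks, filter
from gnuradio.filter import firdes


class DownConverterSketch(gr.top_block):
    """Minimal digital down-conversion: translate an assumed IF offset to baseband."""

    def __init__(self, samp_rate=2.4e6, freq_offset=250e3, decim=4):
        gr.top_block.__init__(self, "down_converter_sketch")

        # Stand-in for the SDR front end: a complex test tone at the IF offset.
        src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, freq_offset, 1.0)

        # Low-pass taps for the translated channel (assumed bandwidth).
        taps = firdes.low_pass(1.0, samp_rate, 200e3, 50e3)

        # Mix the signal down by freq_offset and decimate in a single block.
        xlate = filter.freq_xlating_fir_filter_ccc(decim, taps, freq_offset, samp_rate)

        # Process roughly one second of samples, then stop.
        head = blocks.head(gr.sizeof_gr_complex, int(samp_rate))
        sink = blocks.null_sink(gr.sizeof_gr_complex)

        self.connect(src, xlate, head, sink)


if __name__ == "__main__":
    DownConverterSketch().run()
```

An up-conversion path would do the opposite: interpolate the baseband signal and mix it up by the desired offset, for example with a complex signal source and blocks.multiply_cc, before handing it to the SDR transmit chain.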
Cloud Replica Management-Based Hybrid Optimization
https://journals.riverpublishers.com/index.php/JMM/article/view/24691

This research addresses the challenges of cloud-based replica management by proposing a novel strategy that employs a Genetically Implied Greywolf with Oppositional Learning (GIGOL) hybrid optimization technique. The approach optimizes multiple objectives, namely response time, load balancing, availability, replication cost, and energy consumption, ensuring cost-effectiveness and energy efficiency. The GIGOL model integrates a Genetic Algorithm, opposition-based learning, and Grey Wolf Optimization to achieve optimal replica placement. The study emphasizes resolving overhead issues through machine learning techniques for efficient cloud-based replica management. The performance evaluation shows improvements in response time, load balancing, availability, replication cost, and energy consumption, highlighting the effectiveness of the proposed approach within budget constraints and management policies.

Mohamed Redha Djebbara
Copyright (c) 2024 Journal of Mobile Multimedia. Published 2024-10-01. Pages 917–934. DOI: 10.13052/jmm1550-4646.2047

Aspect Based Feature Extraction in Sentiment Analysis using Bi-GRU-LSTM Model
https://journals.riverpublishers.com/index.php/JMM/article/view/26459

In Natural Language Processing (NLP), Sentiment Analysis (SA) is a fundamental task that predicts the sentiment expressed in sentences. In contrast to conventional sentiment analysis, Aspect-Based Sentiment Analysis (ABSA) takes a more nuanced approach, assessing the sentiment of individual aspects or components within a document or sentence. Its objective is to identify the sentiment polarity, such as positive, neutral, or negative, associated with particular elements mentioned in a sentence. This research introduces a novel sentiment analysis technique that proves more efficient than current methods. The proposed method proceeds in three key phases: (1) pre-processing, (2) aspect sentiment extraction, and (3) sentiment classification. The input text is pre-processed with four typical text normalization techniques: stemming, stop-word elimination, lemmatization, and tokenization. The prepared text is then fed into the aspect sentiment extraction phase, where features are obtained through a series of steps including enhanced Aspect Term Extraction (ATE), assessment of word length, and computation of cosine similarity. Following these steps, the relevant features are extracted on the basis of the aspects and sentiments present in the text. Finally, a hybrid classification model is proposed to classify the sentiments. Two Deep Learning (DL) classifiers, a Bi-directional Gated Recurrent Unit (Bi-GRU) and a Long Short-Term Memory (LSTM) network, are combined into a hybrid model that classifies sentiments effectively and provides accurate final predictions. The performance of the proposed technique is analyzed experimentally to show its efficacy over other models.

Shilpi Gupta, Niraj Singhal, Sheela Hundekari, Kamal Upreti, Anjali Gautam, Pradeep Kumar, Rajesh Verma
Copyright (c) 2024 Journal of Mobile Multimedia. Published 2024-10-01. Pages 935–960. DOI: 10.13052/jmm1550-4646.2048
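To illustrate the kind of hybrid Bi-GRU/LSTM classifier described above, the following Keras sketch stacks a bidirectional GRU over an embedding layer and feeds its outputs to an LSTM that makes a three-way polarity decision. The vocabulary size, sequence length, layer widths, and class count are illustrative assumptions rather than the configuration reported in the paper.

```python
from tensorflow.keras import layers, models

# Assumed hyperparameters (placeholders, not taken from the paper).
VOCAB_SIZE, MAX_LEN, EMB_DIM, NUM_CLASSES = 30000, 80, 128, 3

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, EMB_DIM),
    # Bidirectional GRU reads the aspect-bearing token sequence in both directions.
    layers.Bidirectional(layers.GRU(64, return_sequences=True)),
    # LSTM layer summarizes the Bi-GRU outputs into a single representation.
    layers.LSTM(64),
    layers.Dropout(0.3),
    # Three-way polarity decision: positive, neutral, negative.
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

In use, model.fit would be called on padded token sequences produced by the pre-processing and aspect-extraction phases described in the abstract, with integer polarity labels as targets.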
Success Factors for Conceptual Digital Voting Model
https://journals.riverpublishers.com/index.php/JMM/article/view/24895

With the advancement of technology in the digital age, blockchain has evolved into a technology critical to delivering secure and reliable decentralized applications. One application of blockchain technology is elections, where it can close the gaps in transparency and credibility found in traditional elections; during COVID-19 in particular, using this technology to change how elections are run can give all citizens access to voting. This research uses a structural equation model (SEM) to explore the success factors of implementing elections with blockchain technology; questionnaire responses from 400 voters were analysed with Mplus Version 7. The research prepared a conceptual model of the factors affecting the implementation of a blockchain-based election system from the voters' perspective, and the researcher built a readily available electoral system using blockchain technology. Technology acceptance and credibility were examined from the voters' point of view across nine factors. Those interested in applying the model can use it to improve elections based on blockchain technology. In addition, the conceptual model was used to develop a model of acceptance and trust in elections using blockchain technology.

Danai Dabpimjub, Supaporn Kiattisin
Copyright (c) 2024 Journal of Mobile Multimedia. Published 2024-10-01. Pages 785–820. DOI: 10.13052/jmm1550-4646.2042

The Importance of Digital Adoption for Workforce in Various Sectors: A Comparative Analysis
https://journals.riverpublishers.com/index.php/JMM/article/view/24791

This article explores the significance of digital adoption for the workforce in different sectors, including construction, manufacturing, services, and agriculture. By analysing globally accepted statistics and scholarly articles, we present a comprehensive overview of the impact of adopting digital and mobile applications on both blue-collar and white-collar workers. The findings emphasize the transformative power of digital technologies and highlight the potential benefits and challenges associated with implementing digital and mobile applications in each sector. This research underscores the need for strategic planning and effective training to maximize the advantages of digital adoption across diverse industries.

Worachanatip Janthanu, Smitti Darakorn Na Ayuthaya, Supaporn Kiattisin
Copyright (c) 2024 Journal of Mobile Multimedia. Published 2024-10-01. Pages 845–878. DOI: 10.13052/jmm1550-4646.2044