
Monday, July 17, 2023

TeleKinect: Enabling Collaborative Interactions in a Virtual Space

Abstract:

TeleKinect is an advanced platform that facilitates collaborative interactions in a virtual space, enabling seamless long-distance collaborations. By utilizing real-time video transport capabilities, TeleKinect empowers users to merge foreground elements from remote locations into a unified virtual environment. In this position article, we present two innovative applications that leverage the TeleKinect framework to foster shared experiences over long distances. WaaZam focuses on enhancing social engagement through creative play, while InReach explores the manipulation of virtual objects and 3D data in a shared digital workspace.


Introduction:

In an era of globalization, individuals, teams, and families often find themselves geographically scattered. Remote collaborations have become an integral part of everyday life, necessitating the development of more effective systems to bridge the distance. While technology has made significant advancements, challenges such as disconnected physical and virtual spaces, limited gesture-based communication tools, and restricted manipulation of shared content still persist. It is crucial to distinguish between two types of remote experiences.

The first type involves crucial decision-making meetings, where the emphasis is on creating a sense of being in the same physical space, facilitating face-to-face interactions and interpersonal connections. This requires realistic representations of remote individuals in terms of size and gaze direction, as well as extending the remote physical space into the local environment. The second type encompasses creative sessions focused on collaborative creation and modification of digital content. Here, the focus is on the data itself and the shared space.

WAAZAM: Empowering Creative Collaboration

As our social circles expand across vast distances, the demand for synchronous creative interactions becomes more prominent. Unfortunately, existing communication platforms do not adequately support co-creative activities over long distances. Particularly in the realms of theatre and dance, where participants must directly coordinate with one another, the potential of creative telepresence systems remains largely unexplored. WaaZam is an innovative telepresence platform that prioritizes creative collaboration within a composited video environment. By incorporating depth analysis, object tracking, and gestural interaction, WaaZam offers users greater creative control during live sessions.


Throughout the years, artists and technologists have developed various strategies to foster engagement with interactive content. Remarkable examples include early artificial reality experiments by Myron Krueger and transformative mirrors by David Rokeby. In the research domain, composited video environments such as Reflexion by Agamanolis, PhotoMirror by Markopoulos, and the HyperMirror project have successfully merged distant spaces on a single screen. Building upon this foundation, our application focuses on identifying the tools and scenarios that best facilitate social engagement through creative play, especially between parents and children. Additionally, it enables collaborative customization of the environment, fostering a sense of togetherness.

INREACH: Bridging the Interpersonal Space and Shared Workspace

We aim to seamlessly integrate the interpersonal space and the shared workspace into a unified, cohesive experience. InReach (Figure 3) presents collaborators with a shared virtual space in which their live, 3D-reconstructed meshes are displayed side by side on a large screen, creating the illusion of an augmented mirror (Figure 4). The mirror lets users observe themselves and their collaborators within the models and information they are jointly working on, breaking down the virtual barriers that separate collaborators in face-to-face settings. Collaborators can point to data or 3D models, interact with digital objects using their bare hands, and manipulate them through translation, scaling, and rotation. We distinguish between one-handed and bimanual actions, and showcase these interactions in contrast to the traditional view offered by remote conferencing.
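The one-handed versus bimanual distinction can be sketched roughly as follows. This is a toy Python illustration, not InReach's actual implementation: the function name, the input format, and the restriction of rotation to the view axis are our own assumptions.

```python
import numpy as np

def manipulation_from_hands(prev, curr):
    """Map tracked hand positions to an object manipulation.

    prev, curr: lists of (x, y, z) hand positions in the previous and
    current frame; one entry means a one-handed action, two entries a
    bimanual one.  Returns a dict describing the transform to apply.
    """
    prev = np.asarray(prev, dtype=float)
    curr = np.asarray(curr, dtype=float)
    if len(curr) == 1:
        # One-handed action: translate the object with the hand.
        return {"translate": curr[0] - prev[0]}
    # Bimanual action: scale from the change in hand separation,
    # rotate (about the view axis) from the change in inter-hand angle.
    v_prev = prev[1] - prev[0]
    v_curr = curr[1] - curr[0]
    scale = np.linalg.norm(v_curr) / np.linalg.norm(v_prev)
    angle = np.arctan2(v_curr[1], v_curr[0]) - np.arctan2(v_prev[1], v_prev[0])
    return {"scale": scale, "rotate": angle}
```

A real system would of course smooth the tracked positions over several frames and gate the manipulation on a grab gesture rather than applying it continuously.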

Figure 3:


Figure 4:

InReach proves particularly valuable in situations where users rely on bodily gestures to indicate and manipulate data while simultaneously visualizing their own presence in relation to that data. Relevant prior work in this field includes ClearBoard, a system that facilitates collaborative drawing on a shared virtual surface by two remote users. The concept of "the office of the future" envisions collaborative manipulation of virtual 3D objects from physical office desks, extending the real office through projected images of a remote office and virtual objects. MirageTable simulates the scenario of two collaborators working together at a table, enabling the creation of virtual replicas of real objects. Digital representations of each user's hands can then interact with these virtual objects within a physical simulation. ARCADE enables remote video-based presentations, allowing the placement and direct manipulation of virtual 3D objects by the remote collaborator.

Discussion and Future Work:

The initial feedback from users of both applications has been promising, motivating us to enhance our hardware precision, improve and evaluate gestural capabilities, simulate more realistic physics interactions, and explore our system in diverse contexts. In future work, WaaZam will present case studies that delve into specific user experiences, elucidating the challenges associated with creative coordination over long distances. Our researchers are actively studying ways to support and encourage storytelling, pretend-play, and improvisation among parents and children in divorced families and families with a traveling parent.

Furthermore, we are conducting a comprehensive user study to assess the effectiveness of customization features and explore the potential of these environments to foster social engagement through creative play and shared activities. As for InReach, our long-term goal is to investigate the integration of our system in industries that heavily rely on 3D models in their design processes, such as architecture, interior design, landscape design, as well as fields where collaboration revolves around bodily activities like theatre and dance. Additionally, we aim to leverage the shared virtual space to facilitate joint presentations involving two or more remote collaborators.

 

Hashtag/Keyword/Labels:

TeleKinect, collaborative interaction, remote collaboration, virtual space, shared experiences, WaaZam, InReach

 

References/Resources:

1. Benko, H., Jota, R., and Wilson, A. MirageTable: freehand interaction on a projected augmented reality tabletop. CHI 2012.

2. Cullinan, C. and Agamanolis, S. Reflexion: a responsive virtual mirror. UIST 2002.

3. Krueger, M. Artificial Reality. Addison-Wesley, 1991.

4. Ishii, H., Kobayashi, M., and Arita, K. Iterative design of seamless collaboration media. Commun. ACM 37 (1994).

 

For more such Seminar articles click index – Computer Science Seminar Articles list-2023.

[All images are taken from Google Search or respective reference sites.]

 

…till next post, bye-bye and take care.

Sunday, July 16, 2023

Smart Voting System Support through Face Recognition

Abstract

This article presents a novel authentication technique for online voting systems based on facial recognition of voters. India presently uses two voting methods: the secret ballot paper and Electronic Voting Machines (EVMs), and both have limitations. Online voting has yet to be implemented in India, and the existing system lacks adequate safety and security measures: voters must travel to polling booths, resulting in long queues and missed voting opportunities, and ineligible voters can fraudulently cast votes. This project therefore proposes an effective and secure voting system that incorporates three levels of security in the voting process.



The first level verifies the Unique ID number (UID); the second verifies the Election ID number (EID); and the third applies face recognition, or face matching. Together these measures significantly strengthen authentication for each voter: incorporating face recognition into the application lets the system determine accurately whether a user is authorized.

 

Introduction

 

In India, there are currently two methods of voting. The first uses secret ballot papers, which consume large quantities of paper; the second uses Electronic Voting Machines (EVMs), in use since 2003. A more secure method of online voting is needed. In this article, we propose using face detection and recognition to identify the correct person. The proposed system applies three levels of verification: the first verifies the Unique ID number, the second verifies the Election Commission ID (voter card) number, and the third uses face recognition to match a captured image against the database of face images held by the Election Commission. If the captured image matches the stored image, the voter is allowed to cast their vote in the election.

 

The existing voting system relies on ballot machines whose buttons carry the symbols of the political parties; the voter casts a vote by pressing the button beside the desired party's symbol. This arrangement permits fake votes: individuals can cast votes using fraudulent voting cards, and voters must also travel long distances to their constituencies. An effective method is therefore needed to identify fraudulent voters during the voting process. The proposed system addresses these issues and enables voters to cast their votes online, eliminating the need for physical travel.

 

Proposed System

 

The proposed system employs three security levels:

 

Level 1: Unique ID Number (UID)

– During the voter registration process, the system requests a unique ID from the voter. The entered unique ID is verified against the database provided by the Election Commission.

 

Level 2: Election Commission ID Card Number

– In the second level of verification, the voter must enter the Election Commission ID or voter's ID number. The entered ID number is verified against the database provided by the Election Commission.

 

Level 3: Face Recognition with Respective Election Commission ID Number

– This level utilizes the Eigenface algorithm to verify the facial image of the voters from the database provided by the Election Commission.
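Taken together, the three levels form a short authentication chain. The sketch below is a minimal Python illustration; the record layout, function names, and the distance threshold are our own assumptions rather than details of the proposed system, and the face comparison is a placeholder for the Eigenface match described next.

```python
# Illustrative voter database: UID -> Election Commission record.
VOTER_DB = {
    "UID123": {"eid": "EID456", "face_template": [0.1, 0.4, 0.9]},
}

def face_distance(a, b):
    # Placeholder for the Eigenface comparison (distance in face space).
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def authenticate(uid, eid, captured_face, threshold=0.5):
    record = VOTER_DB.get(uid)
    if record is None:          # Level 1: UID not found in the database
        return False
    if record["eid"] != eid:    # Level 2: Election Commission ID mismatch
        return False
    # Level 3: face match against the stored template.
    return face_distance(captured_face, record["face_template"]) < threshold
```

Only a voter who passes all three checks would reach the voting page.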

 

Eigenface Algorithm

 

The Eigenface algorithm takes an appearance-based approach to face recognition: it captures the variation in a collection of face images and uses that information to encode individual faces, which are then compared against the collection holistically. The Eigenfaces are the eigenvectors of the covariance matrix computed from the set of training face images; a small number of them form a basis in which the original training images can be represented, yielding substantial dimension reduction. Classification is achieved by comparing how faces are represented in this basis. Face images are projected into a feature space called "face space," which best encodes the variation among the known face images; the face space is spanned by the Eigenfaces.
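The construction can be sketched in a few lines of numpy. This is a minimal illustration assuming flattened grayscale images; it uses the standard "snapshot" trick, in which eigenvectors of the small n×n matrix A·Aᵀ stand in for eigenvectors of the full pixel-space covariance matrix:

```python
import numpy as np

def compute_eigenfaces(images, k):
    """Compute the top-k Eigenfaces from flattened face images.

    images: array of shape (n_images, n_pixels).
    Returns (mean_face, eigenfaces) with eigenfaces of shape (k, n_pixels).
    """
    X = np.asarray(images, dtype=float)
    mean_face = X.mean(axis=0)
    A = X - mean_face                        # centre the data
    # Snapshot trick: eigenvectors of the small n x n matrix A A^T
    # map to eigenvectors of the large covariance matrix A^T A.
    L = A @ A.T
    vals, vecs = np.linalg.eigh(L)           # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]       # keep the k largest
    eigenfaces = (A.T @ vecs[:, order]).T    # back-project, one per row
    eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)
    return mean_face, eigenfaces

def project(face, mean_face, eigenfaces):
    """Weight vector of a face in the Eigenface basis ('face space')."""
    return eigenfaces @ (np.asarray(face, dtype=float) - mean_face)
```

The weight vector returned by `project` is the low-dimensional encoding that the recognition steps below compare.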

 

Working of Eigenface Algorithm

 

The working flow of the system using the Eigenface algorithm is as follows:

 

1. Initialization: acquire the training set and compute the Eigenfaces (via PCA projection); these define the face space.

2. When a new face image is encountered, calculate its weight vector by projecting it onto the Eigenfaces.

3. Determine whether the image is a face at all, based on its distance from face space.

4. If it is a face, classify its weight pattern as a known or unknown person.

5. If the same unknown face is seen several times, incorporate it into the set of known faces (the learning step).

Eigenface thus follows the Principal Component Analysis approach: the set of face images forms a cluster in image space, and PCA extracts the directions that best describe that cluster.
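Steps 2 through 5 reduce to comparing weight vectors in face space. The toy Python sketch below illustrates the decision logic; the threshold values, and the use of reconstruction error for the "is it a face" test, are our own assumptions and would be tuned on training data in practice:

```python
import numpy as np

def classify_face(weights, recon_error, known,
                  face_thresh=5.0, match_thresh=2.0):
    """Classify a new image from its face-space representation.

    weights:     the image's weight vector in face space (step 2)
    recon_error: distance between the image and its face-space
                 reconstruction, used for the face test (step 3)
    known:       dict mapping person name -> stored weight vector
    Returns a person's name, "unknown face", or "not a face".
    """
    # Step 3: an image far from face space is not a face at all.
    if recon_error > face_thresh:
        return "not a face"
    # Step 4: find the nearest known weight pattern.
    name, dist = min(
        ((n, np.linalg.norm(np.asarray(weights) - np.asarray(w)))
         for n, w in known.items()),
        key=lambda item: item[1],
    )
    return name if dist < match_thresh else "unknown face"
```

Repeated "unknown face" results for the same person would trigger the learning step (5), adding that weight vector to `known`.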

 

Experiment and Results

 

For our experiments, we utilized facial images from the ORL database: 16 persons with 10 views each. The training set contained 16×7 = 112 images (seven views per person).

Working Flow of the System

 

The working flow of the system involves the following steps:

 

1. Registration: Every new user in India must register for voting. At the time of registration, the system captures the user's face using a web camera and stores the face sample in the server database for security purposes.

2. During the election, three levels of security are implemented: unique ID verification, voter ID verification, and face recognition.

3. The system verifies the entered unique ID and voter ID to ensure their accuracy.

4. If the unique ID and voter ID are correct, the system captures the voter's image and compares it with the respective image in the database or server.

5. If the captured image matches the image in the database, the voter is allowed to cast their vote.

6. On the voting page, buttons representing the participating political parties are displayed, and the voter selects one to cast their vote.

7. Once a voter has cast their vote, their session is logged out and their ID is marked as having voted, ensuring that each voter can cast only one vote.

8. During the vote counting process, only authorized users from the Election Commission can log in, using a secure ID and password; if both are correct, counting proceeds.
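Steps 6 through 8 can be sketched as a minimal server-side check. The party names, storage, and admin credentials below are purely illustrative; a real system would use hashed credentials and persistent, audited storage.

```python
class VotingServer:
    """Toy sketch of one-vote enforcement and restricted counting."""

    def __init__(self, parties):
        self.tally = {p: 0 for p in parties}
        self.voted = set()              # voter IDs that have already voted

    def cast_vote(self, voter_id, party):
        if voter_id in self.voted:      # step 7: one vote per voter
            return False
        self.tally[party] += 1
        self.voted.add(voter_id)        # session ends; ID cannot vote again
        return True

    def results(self, admin_id, password):
        # Step 8: only an authorised Election Commission login may count.
        # (Plaintext credentials here are for illustration only.)
        if (admin_id, password) != ("ec-admin", "secret"):
            raise PermissionError("unauthorised")
        return dict(self.tally)
```

Note that `cast_vote` records the voter ID but not which party the ID chose beyond the aggregate tally, preserving ballot secrecy at this level of the sketch.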

Conclusion

 

The existing voting system in India suffers from several defects: a lengthy, time-consuming process, inadequate security, and the potential for bogus voting. Our proposed approach offers a secure and practical alternative. With three levels of security, false voters can be identified and bogus votes prevented during elections; the facial authentication technique is central to detecting fraudulent voters and ensuring the integrity of the electoral process. With the proposed smart voting system, voters can cast their votes from anywhere with internet access. The system requires only a one-time investment from the government and reduces the need for manpower and resources. A centralized repository makes the data easily accessible and simple to back up. The smart voting system provides real-time, updated results, and the database can be refreshed annually or before each election to enroll newly eligible citizens and remove deceased individuals from the voter list.

 

Hashtag/Keyword/Labels:

Smart Voting System, Face Recognition, Authentication, Online Voting, Security, Voter Identification

 

References/Resources:

1. Vetrivendan, L., Viswanathan, R., and Angelin Blessy, J. "Smart Voting System Support through Face Recognition." Seminarsonly.com.

   Link: https://www.seminarsonly.com/computer%20science/smart-voting-system.php

 


 

…till next post, bye-bye and take care.