Recently, tomographic flow cytometry in label-free modality has been demonstrated. The tomographic apparatus operates in quantitative phase imaging (QPI) mode and makes it possible to retrieve the 3D refractive index distribution of single cells flowing along a microfluidic channel. One of the challenging topics related to QPI is the need to extract subcellular compartments, since QPI lacks the specificity typically guaranteed by standard fluorescence microscopy (FM). Here we show the possibility of retrieving the specific refractive index distribution of multiple sub-cellular organelles from the 3D tomograms of flowing cells. Furthermore, we present a novel model for representing and exploring 3D tomograms, displayed in an immersive virtual reality (VR) environment. Thus, new scenarios can be opened not only for visualization but also for analyzing the quantitative measurements of the whole 3D structure of each and every cell.
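As a minimal, hedged illustration of the underlying idea (not the authors' pipeline), the Python sketch below separates sub-cellular compartments from a reconstructed 3D refractive-index (RI) tomogram by simple RI thresholding; the volume, the organelle names, and the RI intervals are invented for the example and would in practice come from calibration and the literature.

```python
# Minimal sketch: isolate sub-cellular compartments from a 3D refractive-index
# tomogram by thresholding. Volume and RI ranges below are illustrative assumptions.
import numpy as np

def segment_by_refractive_index(tomogram: np.ndarray,
                                ri_ranges: dict[str, tuple[float, float]]) -> dict[str, np.ndarray]:
    """Return one boolean mask per organelle, selecting voxels whose RI falls in each range."""
    return {name: (tomogram >= lo) & (tomogram < hi) for name, (lo, hi) in ri_ranges.items()}

# Synthetic 128^3 tomogram with RI values around a cytoplasm-like baseline (~1.35).
rng = np.random.default_rng(0)
tomogram = 1.35 + 0.03 * rng.random((128, 128, 128))

# Hypothetical RI intervals, used only to demonstrate the mechanism.
masks = segment_by_refractive_index(tomogram, {
    "cytoplasm": (1.35, 1.37),
    "nucleus":   (1.37, 1.39),
})
for name, mask in masks.items():
    print(name, "voxels:", int(mask.sum()), "mean RI:", float(tomogram[mask].mean()))
```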
KEYWORDS: Virtual reality, Intelligence systems, Augmented reality, System integration, Safety, Point clouds, Deep learning, Transportation, Standards development, Decision support systems
Maintenance in the railway context has today reached very high safety standards. Still, despite these high standards, the sector's goal remains to continue to use resources and technologies to achieve a total absence of accidents. Our study proposes creating an integrated monitoring system to support the awareness of a planning operator. The system consists of 3 blocks that collect data from the field, process them, and identify anomalies. Subsequently, these data are displayed interactively in a virtual environment that realistically reproduces the piece of railway line we are analyzing. The planning operator can navigate the virtual environment with extreme awareness and plan maintenance interventions. Finally, the prepared maintenance cards will be made available in augmented reality to support the maintainers in locating the intervention area and executing the task.
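Purely as an illustration of the three-block flow (field acquisition, processing, anomaly identification) feeding a virtual scene, the following minimal Python sketch flags out-of-tolerance measurements; the TrackMeasurement fields, nominal gauge, and tolerance are invented for the example and are not taken from the study.

```python
# Illustrative three-block flow: acquire field data -> process -> flag anomalies.
# Anomaly records could then be attached to track positions in the virtual environment.
from dataclasses import dataclass

@dataclass
class TrackMeasurement:
    km_marker: float      # position along the line (assumed field)
    gauge_mm: float       # measured track gauge (assumed field)

def acquire() -> list[TrackMeasurement]:
    # Block 1 placeholder: data collected from field sensors.
    return [TrackMeasurement(12.4, 1435.2), TrackMeasurement(12.5, 1441.8)]

def detect_anomalies(data: list[TrackMeasurement], nominal_mm=1435.0, tol_mm=5.0):
    # Blocks 2-3: process measurements and keep those outside tolerance.
    return [m for m in data if abs(m.gauge_mm - nominal_mm) > tol_mm]

print(detect_anomalies(acquire()))  # anomalies to be highlighted for the planning operator
```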
The term Metaverse was introduced in 1992. Lately, the concept has become a popular buzzword among the general public, thanks to Meta. Yet its intrinsic reliance on the convergence of enabling technologies such as virtual reality, the internet, and social networks cannot be overstated. It might prove a powerful tool for enhancing data discovery and interpretation, especially in a collaborative setup. The work presented here investigates the interaction between a real user and digital objects in a virtual world, to understand which aspects require attention to obtain a natural and comfortable experience. Our work involved the generation of metaballs and a user's interaction with them in a virtual environment. Metaballs are implicit surfaces of arbitrary topology, widely used in computer graphics to model curved objects. The "organic" look and feel of how they interact with each other, and their resemblance to soft tissues, have proved a natural fit for tasks such as surgery simulation. Still, these implicit surfaces can also represent real objects at a much smaller scale, such as the cells of living organisms, and the ability to comfortably interact with supersized versions of such objects could open the door to new possibilities.
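As a hedged illustration of how metaballs can be generated before being handed to a VR engine (this is not the authors' implementation; grid size, ball centres, radii, and the iso-level are arbitrary), the Python sketch below evaluates the classic field f(p) = sum_i r_i^2 / |p - c_i|^2 and polygonizes it with marching cubes.

```python
# Classic metaball scalar field, polygonized with marching cubes; the resulting mesh
# could then be imported into a VR engine. All parameters are illustrative.
import numpy as np
from skimage.measure import marching_cubes  # scikit-image

def metaball_field(grid, centers, radii):
    x, y, z = grid
    f = np.zeros_like(x)
    for (cx, cy, cz), r in zip(centers, radii):
        d2 = (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 + 1e-9
        f += r ** 2 / d2
    return f

n = 64
axis = np.linspace(-2.0, 2.0, n)
grid = np.meshgrid(axis, axis, axis, indexing="ij")
field = metaball_field(grid, centers=[(-0.5, 0.0, 0.0), (0.6, 0.1, 0.0)], radii=[0.6, 0.5])

# Iso-level 1.0 produces the familiar "blobby" merging when the balls approach each other.
verts, faces, normals, values = marching_cubes(field, level=1.0)
print(verts.shape, faces.shape)
```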
Carbon Fiber Reinforced Plastic (CFRP) is an important material in manufacturing, particularly in the automotive and aerospace industries. Its relevance is due to its lightness, strength, and rigidity. In particular, the strength-to-weight ratio is the property that will secure CFRP a favourable future in more and more applications, especially where lightness and strength are essential. On the other hand, CFRP is expensive and difficult to produce. During the production of CFRP artefacts, defects may occur (e.g., material inclusions or bubbles), making the component unusable. When applicable, the CFRP repair process is structured as defect removal (scarfing), cleaning, and patch application. Currently, scarfing is a slow process performed manually by human operators, strongly dependent on the workers' skills, and it generates a large amount of toxic nano-dust. The present paper deals with the implementation of a (semi)automatic robotic cell to repair large CFRP aeronautic components. The robotic cell is composed of a collaborative robot equipped with a custom scarfing tool designed to meet the context-specific constraints. Under these constraints, the resulting machining is not as accurate as in a traditional milling process, thus requiring additional quality control and inspection after scarfing takes place. The paper describes the designed scarfing tool, examines the trajectories generated to execute the scarfing, and illustrates some preliminary results in terms of the accuracy of the shape obtained with the designed tool.
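To make the trajectory idea concrete, the Python sketch below generates one possible scarf path (this is not the paper's trajectory generator): a stepped spiral over a circular defect whose depth tapers toward the centre, approximating a conical scarf. The radius, depth, and step-over values are illustrative assumptions.

```python
# Hedged sketch of a possible scarf trajectory: an inward spiral whose depth increases
# toward the centre of the defect, giving a conical material-removal profile.
import numpy as np

def conical_scarf_spiral(radius_mm=30.0, max_depth_mm=2.0, step_mm=1.5, pts_per_rev=90):
    turns = int(np.ceil(radius_mm / step_mm))
    t = np.linspace(0.0, turns * 2 * np.pi, turns * pts_per_rev)
    r = radius_mm * (1.0 - t / t[-1])          # spiral inward from the rim to the centre
    z = -max_depth_mm * (1.0 - r / radius_mm)  # zero depth at the rim, maximum at the centre
    return np.column_stack([r * np.cos(t), r * np.sin(t), z])  # (N, 3) tool-path points

path = conical_scarf_spiral()
print(path.shape, "first point:", path[0], "last point:", path[-1])
```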
This paper presents a preliminary study for evaluating the quality of welds in thermomagnetic switches using 3D sensing and machine learning techniques. A 3D sensor based on laser triangulation is used to gather the point cloud of the component. The point cloud is then processed to extract hand-crafted signatures for binary classification: defective or non-defective component. Features such as Gaussian and mean curvatures, density, and quadric surface properties are used to build these signatures. Different machine learning models, including decision trees, support vector machines, k-nearest neighbors, random forests, ensemble classifiers, and artificial neural networks, are trained on the built signatures to classify the weld as defective or non-defective. Preliminary results on actual data achieve high classification accuracy (>84%) for all the tested models.
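The sketch below shows, on synthetic data, one way such a signature-plus-classifier pipeline can be assembled; it is not the authors' feature set. Here the signature combines a least-squares quadric fit of the weld surface with a point-density term, and the curvature-based features used in the paper would be appended analogously.

```python
# Hedged sketch: build a per-weld signature from a point cloud and train a binary
# classifier. All data and thresholds are synthetic and illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def quadric_signature(points: np.ndarray) -> np.ndarray:
    x, y, z = points.T
    # Least-squares fit of z = a x^2 + b y^2 + c xy + d x + e y + f.
    A = np.column_stack([x**2, y**2, x*y, x, y, np.ones_like(x)])
    coeffs, residuals, *_ = np.linalg.lstsq(A, z, rcond=None)
    rms = np.sqrt(residuals[0] / len(z)) if residuals.size else 0.0
    density = len(points) / (np.ptp(x) * np.ptp(y) + 1e-9)  # points per unit footprint area
    return np.concatenate([coeffs, [rms, density]])

rng = np.random.default_rng(1)
def synth_cloud(defective):
    xy = rng.uniform(-1, 1, (500, 2))
    noise = 0.2 if defective else 0.05   # defective welds -> rougher surface
    z = 0.2 * xy[:, 0]**2 + 0.1 * xy[:, 1]**2 + rng.normal(0, noise, 500)
    return np.column_stack([xy, z])

y = np.array([0] * 40 + [1] * 40)
X = np.array([quadric_signature(synth_cloud(label)) for label in y])
print("cross-validated accuracy:", cross_val_score(SVC(), X, y, cv=5).mean())
```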
Augmented reality is one of the technologies that in recent years has been most in the spotlight for communities as diverse as researchers, industrial actors, and gamers. A common need in almost any scenario is to "register" the virtual world with the real one, so that the right virtual objects can be accurately placed in the user's view. Although positioning could be aided by global systems such as GPS, there are situations in which its accuracy or feasibility cannot be guaranteed. Indeed, some sectors could be prevented from adopting augmented reality as a disruptive technology if this need cannot be adequately fulfilled. In this work, photogrammetry is investigated for scenarios in which a few static, already known, and well-defined real-world objects can be used as anchors within a broader area. The goal is to create a solid and reliable augmented reality framework in terms of precise object placement, to be used in contexts where other solutions lack the required accuracy. In particular, this work considers as its primary use case a solution developed with Microsoft HoloLens 2 for positioning digital objects in the context of railway maintenance, exploiting the recognition of real objects in the environment through photogrammetry techniques. Indeed, only precise positioning of the objects will allow the pervasive diffusion of this technology in sectors such as health and the military, and in all those contexts where accuracy and reliability are essential for ensuring the safety of operations.
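To illustrate the anchoring principle only (this is an OpenCV sketch, not the HoloLens 2 implementation), the code below estimates a camera pose from a few 3D points of a known, photogrammetrically reconstructed object and their detected 2D image locations via a PnP solve; that pose is what allows virtual content to be placed relative to the real object. The points and camera intrinsics are made up for the example.

```python
# Hedged sketch of object-based anchoring: known 3D reference points + their 2D
# detections -> camera pose via solvePnP. All numbers are illustrative.
import numpy as np
import cv2

# 3D points of the reference object in its own coordinate frame (metres, illustrative).
object_points = np.array([[0, 0, 0], [0.3, 0, 0], [0.3, 0.2, 0], [0, 0.2, 0],
                          [0.15, 0.1, 0.05], [0.05, 0.05, 0.02]], dtype=np.float64)
# Corresponding pixel coordinates (in practice obtained from feature matching).
image_points = np.array([[320, 240], [420, 238], [423, 305], [318, 308],
                         [372, 270], [338, 258]], dtype=np.float64)
camera_matrix = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
dist_coeffs = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
if ok:
    R, _ = cv2.Rodrigues(rvec)
    print("object-to-camera rotation:\n", R, "\ntranslation:", tvec.ravel())
```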
This paper proposes an efficient method to provide a robust occupancy grid useful for robot navigation tasks. An omnidirectional indoor robot performing logistics tasks has been equipped with stereo cameras to detect the presence of moving and fixed obstacles. The stereo camera provides a 3D point cloud, from which the occupancy map can be computed. Nevertheless, the point cloud often contains unstable points, mainly due to a low-accuracy disparity map and to light reflections on the floor that produce mismatches during the stereo matching phase. The point cloud is therefore filtered with a cascade approach in order to obtain more robust occupancy grids. Passthrough filters are applied to remove 3D points that are too far away. Since highly reflective floors produce unwanted 3D points, a color filter is also used to remove points with saturated intensity values. The remaining floating points, still belonging to the floor, are then filtered out by taking advantage of the knowledge of the camera tilt. At this stage, a preliminary 2D occupancy grid is built to sample the point cloud, and each bin of the occupancy map is processed. If the cell under investigation contains points, a distribution analysis of the point spread is performed: if the height of the highest point is below a given threshold, the cell value is set to zero, so that residual floor points are further removed. Cells containing too few points are also cleared. Finally, isolated cells of the occupancy grid, and cells that do not have enough valid neighboring cells, are reset. In this way, noisy points and the edge points of objects do not contribute to producing inaccurate occupancy maps. Final outcomes prove that the proposed methodology provides robust occupancy maps while ensuring high performance in terms of processing time.
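A condensed NumPy sketch of such a filtering cascade is given below; all thresholds, the grid resolution, and the input format are illustrative rather than the paper's parameters, and the camera-tilt compensation is assumed to have already placed the points in a floor-aligned frame.

```python
# Condensed sketch of the cascade: passthrough + saturation filtering, 2D binning,
# per-cell height and point-count checks, then removal of isolated cells.
import numpy as np
from scipy.ndimage import convolve

def occupancy_grid(points, cell=0.1, max_range=5.0, sat=250, min_h=0.05, min_pts=5, min_neighbors=2):
    x, y, z, intensity = points.T
    # Passthrough (range), saturation (reflective floor), and coarse floor cut.
    keep = (np.hypot(x, y) < max_range) & (intensity < sat) & (z > 0.0)
    x, y, z = x[keep], y[keep], z[keep]

    gx, gy = (x // cell).astype(int), (y // cell).astype(int)
    gx -= gx.min(); gy -= gy.min()
    counts = np.zeros((gx.max() + 1, gy.max() + 1))
    np.add.at(counts, (gx, gy), 1)                 # point count per cell
    zmax = np.zeros_like(counts)
    np.maximum.at(zmax, (gx, gy), z)               # highest point per cell

    occ = (counts >= min_pts) & (zmax >= min_h)    # drop sparse and floor-only cells
    neighbors = convolve(occ.astype(int), np.ones((3, 3)), mode="constant") - occ
    return occ & (neighbors >= min_neighbors)      # reset isolated cells

rng = np.random.default_rng(2)
pts = np.column_stack([rng.uniform(-4, 4, (2000, 2)),
                       rng.uniform(0, 1.5, 2000),
                       rng.uniform(0, 255, 2000)])
print(occupancy_grid(pts).sum(), "occupied cells")
```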