Publications (32)
Abstract: Background: We compare the accuracy of new intraoral scanners (IOSs) in full-arch digital implant impressions. Methods: A master model with six scan bodies was milled in poly(methyl methacrylate), measured by using a coordinate measuring machine, and scanned 15 times with four IOSs: PrimeScan, Medit i500, Vatech EZ scan, and iTero. The software was developed to identify the position points on each scan body. The 3D position and distance analyses were performed. Results: The average and ± standard deviation of the 3D position analysis was 29 μm ± 6 μm for PrimeScan, 39 μm ± 6 μm for iTero, 48 μm ± 18 μm for Medit i500, and 118 μm ± 24 μm for Vatech EZ scan (p < 0.05). Conclusions: All IOSs are able to make a digital complete implant impression in vitro according to the average misfit value reported in literature (150 μm); however, the 3D distance analysis showed that only the PrimeScan and iTero presented negligible systematic error sources.
Keywords: accuracy | CAD/CAM | dental implant | digital impression | full arch | intra-oral scanner
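The 3D position analysis described above reduces, in essence, to the Euclidean distance between each measured scan-body reference point and its counterpart on the master model, summarised as a mean ± standard deviation. A minimal sketch in Python/NumPy (the coordinates are hypothetical, and a best-fit registration between scan and master model is assumed to have been done already):

```python
import numpy as np

def position_errors(measured, reference):
    """Per-scan-body 3D position error: Euclidean distance between each
    measured reference point and its counterpart on the master model.
    Assumes both point sets are already registered in a common frame."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return np.linalg.norm(measured - reference, axis=1)

# Hypothetical coordinates (mm) for three scan bodies.
ref = [[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [20.0, 5.0, 0.0]]
meas = [[0.01, 0.0, 0.0], [10.0, 0.02, 0.0], [20.0, 5.0, -0.03]]

err = position_errors(meas, ref)
mean_um = err.mean() * 1000.0        # report in micrometres
std_um = err.std(ddof=1) * 1000.0
```

With 15 repeated scans per device, the same per-scan-body errors would be pooled before computing the mean and standard deviation reported in the abstract.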
Abstract: The aircraft seat is rated as the most unsatisfying aspect of flying; understanding the main factors impacting passengers' evaluations can provide a concrete opportunity for airlines to improve seat comfort and thus enhance passenger satisfaction and loyalty. Although there is a great deal of interest, the research on effective assessment strategies for subjective comfort is still underdeveloped. In this study a model-based approach for the analysis of subjective comfort data is suggested. The model adopted can be interpreted as a parametric version of the psychological process generating comfort ratings. The proposed approach is exploited through a case study concerning comfort assessment of aircraft seats designed for regional flights.
Keywords: Aircraft seat comfort | Laboratory experiments | Subjective data analysis | Uncertainty
Abstract: The use of 3D digitizing tools is becoming the base for subject-specific products, such as the orthopaedic production process of orthoses and prostheses. This paper aims at comparing the metrological behaviour of low-cost devices (Kinect 1 and 2 by Microsoft, Structure Sensor by Occipital) and high-resolution active sensors (O&P Scan by Rodin4D, NextEngine Ultra HD, Konica Minolta Vivid 9i, GOM ATOS II 400 and Artec Leo) for the survey of human body parts. A calibrated flat plane and a test-field composed of eight calibrated spheres of different radii and placed at different heights were used to evaluate the standard quality parameters (flatness, probing errors in form and size and the standard deviation) for each device as recommended by the VDI/VDE 2634 guidelines. Subsequently, three different parts of a mannequin were surveyed as samples of human body parts. The results demonstrated the higher accuracy of fixed devices with respect to handheld ones, among which Artec Leo and Structure Sensor provided a satisfying level of accuracy for the orthopaedic application. Moreover, the handheld devices enabled performing a fast reconstruction of the mannequin parts in about 20 s, which is acceptable for a person that has to remain as still as possible. For this reason, the Structure Sensor was further tested with five motion approaches which identified that smooth motion provides the lowest deviation and higher reliability. The work demonstrated the appropriateness of handheld devices for the orthopaedic application requirements in terms of speed, accuracy and costs.
Keywords: 3D metrology | Biomedical applications | Human body 3D reconstruction | Low-cost 3D sensors | Probing error standard deviation | Uncertainty
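The flatness parameter evaluated on the calibrated plane follows the usual least-squares convention: fit a plane to the acquired points and take the peak-to-valley range of the signed point-to-plane distances. A minimal sketch (synthetic data; the VDI/VDE 2634 test setup itself is more involved):

```python
import numpy as np

def flatness(points):
    """Flatness error: fit a least-squares plane through the centroid
    (the singular vector of least variance is the normal) and return the
    peak-to-valley range of signed point-to-plane distances."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                   # direction of least variance
    d = (pts - centroid) @ normal     # signed distances to the plane
    return d.max() - d.min()

# Synthetic "calibrated flat": a 5x5 grid (mm) with a 0.1 mm bump
# on the central point.
plane_pts = [[float(x), float(y), 0.0] for x in range(5) for y in range(5)]
plane_pts[12][2] = 0.1

f = flatness(plane_pts)
```

The probing errors for the calibrated spheres are evaluated analogously, with a least-squares sphere fit in place of the plane.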
Abstract: Marker-less motion capture (MOCAP) systems based on consumer technology simplify the analysis of movements in several research fields such as industry, healthcare and sports. Even if marker-less MOCAP systems offer lower precision and accuracy than marker-based MOCAP solutions, their low cost and ease of use make them the most suitable tools for full-body movement analysis. The most interesting category is that based on RGB-D devices. This research work aims to compare the performances of the last two generations of Kinect devices as marker-less MOCAP systems: the Microsoft Kinect v2 and the Azure Kinect. To conduct the tests, a list of specific movements was acquired and evaluated. This work measures the improvements of the Azure Kinect in tracking human body movements. The gathered results are presented and discussed by evaluating the performances and limitations of both marker-less MOCAP systems. Conclusions and future developments are shown and discussed.
Keywords: Accuracy | Kinect Azure | Kinect V2 | Marker-less MOCAP systems
Abstract: In recent years, the advent of low-cost markerless motion capture systems has fostered their use in several research fields, such as healthcare and sport. Any system presents benefits and drawbacks that have to be considered to design a MOCAP solution providing a proper motion acquisition for a specific context. In order to evaluate low-cost technology, this research work focuses on the accuracy of two categories of devices: RGB active cameras and RGB-D (depth) sensors. In particular, GoPro Hero 6 active cameras and Microsoft Kinect v2 devices were selected as representative of the two categories, chosen among those available on the market after a state-of-the-art review. This work evaluates and compares the performances of the two systems when used to track the position of human articulations. Before starting the acquisition campaign, the number of sensors and their layout were designed to optimize the acquisition with both markerless MOCAP systems. Their comparison is based on a list of specific movements of the upper and lower limbs. Each movement was acquired simultaneously with both systems, to guarantee the same test conditions. The results have been organized, compared and discussed by evaluating the performances and limitations of both solutions in relation to their specific context of use. Conclusions highlight the best candidate technology.
Keywords: Accuracy | active cameras | GoPro | Markerless Mocap systems | Microsoft Kinect
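Comparisons like the two MOCAP studies above typically summarise tracking accuracy as a per-joint error between the positions reported by the system under test and a reference. A minimal sketch of a per-joint RMSE, assuming the two trajectories are already time-synchronized and expressed in a common reference frame (the papers' actual metrics may differ):

```python
import numpy as np

def per_joint_rmse(track_a, track_b):
    """RMSE of the per-frame Euclidean deviation between the joint
    positions reported by two capture systems, one value per joint.
    Input shapes: (frames, joints, 3)."""
    a = np.asarray(track_a, dtype=float)
    b = np.asarray(track_b, dtype=float)
    d = np.linalg.norm(a - b, axis=2)          # (frames, joints)
    return np.sqrt((d ** 2).mean(axis=0))      # (joints,)

# Toy example: 2 frames, 1 joint, deviations of 3 cm and 4 cm.
a = np.zeros((2, 1, 3))
b = np.array([[[0.03, 0.0, 0.0]], [[0.0, 0.04, 0.0]]])
r = per_joint_rmse(a, b)
```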
Abstract: Aims: The purpose of the study was to evaluate the accuracy of a three-dimensional (3D) automated computer-aided design (aCAD) technique for the measurement of three canine femoral angles: anatomical lateral distal femoral angle (aLDFA), femoral neck angle (FNA) and femoral torsion angle. Methods: Twenty-eight femurs, equally divided into two groups (normal and abnormal), were obtained from 14 dogs of different conformations (dolichomorphic and chondrodystrophic). CT scans and 3D scanner acquisitions were used to create stereolithographic (STL) files, which were run in a CAD platform. Two blinded observers separately performed the measurements using the STL files obtained from the CT scans (CT aCAD) and from the 3D scanner (3D aCAD), the latter being considered the gold standard method. Correlation coefficients were used to investigate the strength of the relationship between the two measurements. Results: Accuracy of the aCAD computation was good, always above the threshold of R² greater than 80 per cent for all three angles assessed in both groups. aLDFA and FNA were the most accurate angles (accuracy >90 per cent). Conclusions: The proposed 3D aCAD protocol can be considered a reliable technique to assess femoral angle measurements in the canine femur. The developed algorithm automatically calculates the femoral angles in 3D, thus accounting for the subject-specific intrinsic femur morphology. The main benefit is a fast, user-independent computation, which avoids user-related measurement variability. The accuracy of the 3D details may be helpful for patellar luxation and femoral bone deformity correction, as well as for the design of patient-specific, custom-made hip prosthesis implants.
Keywords: 3D computation | accuracy | dogs | femur
Abstract: 3D reconstruction of human anatomy from cross-sectional imaging has recently gained increasing importance in several medical fields, making the accuracy of 3D bone reconstruction critical for the success of the whole surgical intervention. The quality of the 3D anatomic model depends on the quality of the reconstructed image, on the quality of the image segmentation step and on the error introduced by the iso-surface triangulation algorithm. The influence of image processing procedures and their parametrization has been largely studied in the scientific literature; however, an analysis of the direct impact of the quality of the reconstructed medical images is still lacking. In this paper, a comparative study on the influence of both the image reconstruction algorithm (standard and iterative) and the applied kernel is reported. The research was performed on the 3D reconstruction of a pig tibia, using a Philips Brilliance 64 CT scanner. At the scanning stage and at the 3D reconstruction stage the same procedures were followed, while only the image reconstruction algorithm and kernel were changed. The influence of this selection on the accuracy of the bone geometry was assessed by comparing it against the 3D model obtained with a professional 3D scanner. Results show an average error in reconstructing the geometry of around 0.1 mm with a variance of 0.08 mm. The presented study highlights new opportunities to control the deviations in the geometric accuracy of bone structures at the stage of cross-sectional image generation.
Keywords: 3D model reconstruction | Accuracy | Computed tomography | Kernel reconstruction
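The comparison against the professional 3D scanner above amounts to a deviation analysis between two point sets: for each point of the CT-derived model, find the nearest point on the reference and summarise the distances. A minimal brute-force sketch (real meshes would use a KD-tree or point-to-surface distances; the data here are synthetic):

```python
import numpy as np

def deviation_stats(test_pts, ref_pts):
    """Mean and variance of nearest-neighbour distances from each point
    of the model under test to the reference model (brute force)."""
    t = np.asarray(test_pts, dtype=float)
    r = np.asarray(ref_pts, dtype=float)
    # Pairwise distance matrix, then the closest reference point per test point.
    d = np.linalg.norm(t[:, None, :] - r[None, :, :], axis=2).min(axis=1)
    return d.mean(), d.var()

# Synthetic check: a grid shifted by 0.1 mm along z against the original grid.
ref = [[float(x), float(y), 0.0] for x in range(3) for y in range(3)]
test = [[p[0], p[1], 0.1] for p in ref]
m, v = deviation_stats(test, ref)
```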
Abstract: Purpose: To compare the accuracy of intraoral digital impressions of a full-arch implant-supported fixed dental prosthesis acquired with eight different intraoral scanners (IOSs). Methods: A polymethyl methacrylate acrylic model of an edentulous mandible with six scan-abutments was used as a master model and its dimensions measured with a coordinate measuring machine. Eight different IOSs were used to generate digital impressions: True Definition, Trios, Cerec Omnicam, 3D progress, CS3500, CS3600, Planmeca Emerald and Dental Wings. Fifteen digital impressions were made. A software tool called "Scan-abut" was developed to analyse and compare the digital impressions with the master model, obtaining the scanning accuracy. Three-dimensional (3D) position and distance analyses were performed. Results: The mean values of the 3D position analysis showed that the True Definition (31 μm ± 8 μm) and Trios (32 μm ± 5 μm) had the best performance of the group. The Cerec Omnicam (71 μm ± 55 μm) and CS3600 (61 μm ± 14 μm) had an average performance. The CS3500 (107 μm ± 28 μm) and Planmeca Emerald (101 μm ± 38 μm) presented a middle-low performance, while the 3D progress (344 μm ± 121 μm) and Dental Wings (148 μm ± 64 μm) showed the lowest performance. The 3D distance analysis showed a good linear relationship between the errors and the scan-abutment distance only with the True Definition and CS3600. Conclusions: Not all scanners are suitable for digital impressions of full-arch implant-supported fixed dental prostheses, and the size of the output files is independent of the accuracy of the IOS.
Keywords: Accuracy | Dental implant | Digital impression | Full arch | Intraoral scanner
Abstract: This paper proposes a replicable methodology to enhance the accuracy of the photogrammetric reconstruction of large-scale objects based on the optimization of the procedures for Unmanned Aerial Vehicle (UAV) camera image acquisition. The relationships between the acquisition grid shapes, the acquisition grid geometric parameters (pitches, image rates, camera framing, flight heights), and the 3D photogrammetric surface reconstruction accuracy were studied. Ground Sampling Distance (GSD), the number of photos necessary to assure the desired overlap, and the surface reconstruction accuracy were related to grid shapes, image rate, and camera framing at different flight heights. The established relationships make it possible to choose the best combination of grid shapes and acquisition grid geometric parameters to obtain the desired accuracy for the required GSD. This outcome was assessed by means of a case study related to the ancient arched brick Bridge of the Saracens in Adrano (Sicily, Italy). The reconstruction of the three-dimensional surfaces of this structure, obtained by the efficient Structure-from-Motion (SfM) algorithms of the commercial software Pix4Dmapper, supported the study by validating it with experimental data. A comparison between the surface reconstruction with different acquisition grids at different flight heights and the measurements obtained with a 3D terrestrial laser and total station-theodolites made it possible to evaluate the accuracy in terms of Euclidean distances.
Keywords: Accuracy | Acquisition grid optimization | Digital surfaces models | Ground sampling distance | Structure-from-motion algorithms
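The GSD relationship used in flight planning is standard: the ground footprint of one pixel grows linearly with flight height and shrinks with focal length. A small sketch relating GSD, footprint and the number of shots per grid line for a chosen forward overlap (camera parameters below are hypothetical, loosely typical of a small UAV camera):

```python
import math

def ground_sampling_distance(sensor_width_mm, focal_mm, image_width_px, height_m):
    """GSD in cm/pixel: ground size of one pixel at the given flight height."""
    return (sensor_width_mm * height_m * 100.0) / (focal_mm * image_width_px)

def photos_along_track(track_length_m, footprint_m, overlap):
    """Number of shots needed along one grid line for a given forward overlap
    (e.g. overlap=0.8 for 80%)."""
    advance = footprint_m * (1.0 - overlap)   # ground advance between shots
    return math.ceil(track_length_m / advance) + 1

# Hypothetical camera: 13.2 mm sensor width, 8.8 mm focal, 5472 px wide, 30 m height.
g = ground_sampling_distance(13.2, 8.8, 5472, 30.0)
footprint = g * 5472 / 100.0                  # image ground footprint in metres
n = photos_along_track(100.0, footprint, 0.8) # shots for a 100 m line at 80% overlap
```

Flying higher trades a larger GSD (coarser detail) for fewer photos, which is exactly the trade-off the acquisition grid optimization explores.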
Abstract: Purpose: This study describes a method for measuring the accuracy of the virtual impression. Methods: In vitro measurements according to a metrological approach were based on (1) use of an opto-mechanical coordinate measuring machine to acquire 3D points from a master model, (2) the mathematical reconstruction of regular geometric features (planes, cylinders, points) from 3D points or an STL file, and (3) consistent definition and evaluation of position and distance errors describing scanning inaccuracies. Two expert and two inexpert operators each made five impressions. The 3D position error, with its relevant X, Y, and Z components, the mean 3D position error of each scanbody, and the intra-scanbody distance error were measured using analysis of variance and Scheffé's test for multiple comparison. Results: Statistically significant differences in the accuracy of the impression were observed among the operators for each scanbody, despite the good reliability (Cronbach's α = 0.897). The mean 3D position error of the digital impression was between 0.041 ± 0.023 mm and 0.082 ± 0.030 mm. Conclusions: Within the limitations of this in vitro study, which was performed using a single commercial system for preparing digital impressions and one test configuration, the data showed that the digital impressions had a level of accuracy comparable to that reported in other studies, and which was acceptable for clinical and technological applications. The distance between the individual positions (#36 to #46) of the scanbody influenced the magnitude of the error. The position error generated by the intraoral scanner was dependent on the length of the arch scanned. Operator skill and experience may influence the accuracy of the impression.
Keywords: Accuracy | CAD–CAM | Digital impression | Opto-mechanical measuring
Abstract: This paper presents a new methodology whose goals are, on the one hand, the formulation of a tolerance specification that is consistent with functional, technological and control needs and, on the other, the automatic control of tolerances. The key aspect of the methodology is the digital model of the product, referred to as GMT (Geometric Model of Tolerancing), which gives a complete, consistent and efficient description of its geometrical and dimensional properties with the aim of being able to specify, simulate, manufacture and inspect them. By means of a real test case, the potentialities of a first implementation of the proposed methodology are critically discussed.
Keywords: CAT (Computer-Aided Tolerancing) | GD&T (Geometric Dimensioning and Tolerancing) | Geometric inspection | GPS (Geometric Product Specification)
Abstract: A new method for secondary feature segmentation, performed on high-density acquired geometric models, is proposed. Four types of secondary features are considered: fillets, rounds, grooves and sharp edges. The method is based on an algorithm that analyzes the principal curvatures. The nodes potentially attributable to a fillet of given geometry are those with a certain value of maximum principal curvature. Since the deterministic application of this simple working principle shows several problems due to the uncertainties in the curvature estimation, a fuzzy approach is proposed. In order to segment the nodes of a tessellated model that pertain to the same secondary feature, proper membership functions are evaluated as a function of some parameters which affect the quality of the curvature estimation. A region growing algorithm connects the nodes pertaining to the same secondary feature. The method is applied and verified for some test cases.
Keywords: Computational geometry | Features extractions | Fuzzy logic | Mechanical engineering computing | Region growing algorithm
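The fuzzy idea above replaces a crisp curvature threshold with a graded score, so that curvature-estimation noise does not flip nodes in or out of a feature. A minimal single-variable sketch (the paper's actual membership functions depend on several estimation-quality parameters; the triangular shape, target and tolerance below are illustrative assumptions):

```python
def fillet_membership(k_max, target, tol):
    """Triangular fuzzy membership: 1.0 when the maximum principal
    curvature equals the target 1/r of the sought fillet, falling
    linearly to 0 at +/- tol."""
    return max(0.0, 1.0 - abs(k_max - target) / tol)

# Fillet of radius 2 mm -> target curvature 0.5 1/mm; tolerance 0.1 1/mm.
scores = [fillet_membership(k, 0.5, 0.1) for k in (0.50, 0.56, 0.43, 0.80)]

# Nodes scoring above a cut (0.5 here) are passed to region growing,
# which then connects adjacent candidates into one fillet.
candidates = [i for i, s in enumerate(scores) if s > 0.5]
```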
Abstract: Underwater manipulation is an essential operation for performing a diverse range of applications in the submerged environment and, in spite of the hostile and unstructured environment, it requires high precision and reliability of the robotic arm. The paper presents the evaluation and characterization of the kinematic performances of an underwater robotic arm mounted on a light work-class ROV. The arm analyzed in the study is a re-engineered version of a commercial hydraulic manipulator whose geometry and end-effector have been modified. Moreover, the arm has been equipped with a set of encoders to provide positioning feedback. The tests conducted in the laboratory focused on the measurement of accuracy and repeatability in order to evaluate the limits of the arm architecture. This work has been carried out in the context of the CoMAS (In situ conservation planning of Underwater Archaeological Artifacts - http://www.comasproject.eu) project, in which the possibility to develop a ROV able to perform maintenance operations in underwater archaeological sites has been investigated.
Keywords: accuracy | forward kinematics | repeatability | robotic arm | underwater manipulator
Abstract: In this paper, the problem of the evaluation of the uncertainties that originate in the complex design process of a new system is analyzed, paying particular attention to multibody mechanical systems. To this end, the Wiener-Shannon axioms are extended to non-probabilistic events and a theory of information for non-repetitive events is used as a measure of the reliability of data. The selection of the solutions consistent with the values of the design constraints is performed by analyzing the complexity of the relation matrix and using the idea of information in the metric space. Comparing the alternatives in terms of the amount of entropy resulting from the various distributions, this method is capable of finding the optimal solution that can be obtained with the available resources. In the paper, the algorithmic steps of the proposed method are discussed and an illustrative numerical example is provided.
Keywords: Complexity | Design | Fair division | Multibody systems | Non-probabilistic entropy | Uncertainty
Abstract: In this paper we analyze the capabilities of a routine, based on fuzzy logic, for elaborating a data set coming from a CMM (Coordinate Measuring Machine). We show how to obtain the best measurement during hole measuring, so that the approximation error is minimized. Moreover, the CMM on-board software can elaborate these data and select the mathematical representation of the stored data, by identifying quotes, measures, axes, diameters, tolerances and so on. Information on measured parts is usually elaborated by an algorithm based on the least-squares error method, in order to evaluate the shape quality of the hole; our purpose is to propose a new kind of approach, based on an inferential fuzzy system, both to reduce the number of measured points and to obtain the same accuracy. Our approach makes it possible to measure holes with fewer points than usually needed by the CMM software. Thus the time spent obtaining a good measurement is significantly reduced.
Keywords: Accuracy | Coordinate Measuring Machine | Fuzzy inference | Holes measurement | Precision
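The least-squares baseline the fuzzy approach is compared against is a standard circle fit to the probed points. A common algebraic formulation (the Kåsa fit) solves one linear system; this is a generic sketch of that baseline, not the paper's fuzzy routine:

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit: from
    x^2 + y^2 + a*x + b*y + c = 0, solve for (a, b, c) linearly, then
    center = (-a/2, -b/2) and radius = sqrt(cx^2 + cy^2 - c)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx**2 + cy**2 - c)
    return cx, cy, r

# Four probed points on an ideal 5 mm-radius hole centred at (10, -3).
th = np.deg2rad([0.0, 90.0, 180.0, 270.0])
pts = np.column_stack([10 + 5 * np.cos(th), -3 + 5 * np.sin(th)])
cx, cy, r = fit_circle(pts)
```

Reducing the number of probed points degrades this fit in the presence of noise, which is the trade-off the fuzzy inference method addresses.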
Abstract: Purpose: The study aims to evaluate three-dimensionally (3D) the accuracy of implant impressions using a new resin splinting material, "Smart Dentin Replacement" (SDR). Materials and Methods: A titanium model of an edentulous mandible with six implant analogues was used as a master model and its dimensions measured with a coordinate measuring machine. Before the total 60 impressions were taken (open tray, screw-retained abutments, vinyl polysiloxane), they were divided into four groups: A (test): copings pick-up splinted with dental floss and photopolymerizing SDR; B (test): see A, additionally sectioned and splinted again with SDR; C (control): copings pick-up splinted with dental floss and autopolymerizing Duralay® (Reliance Dental Mfg. Co., Alsip, IL, USA) acrylic resin; and D (control): see C, additionally sectioned and splinted again with Duralay. The impressions were measured directly with an optomechanical coordinate measuring machine and analyzed with a computer-aided design (CAD) geometric modeling software. The Wilcoxon matched-pair signed-rank test was used to compare groups. Results: While there was no difference (p = .430) between the mean 3D deviations of the test groups A (17.5 μm) and B (17.4 μm), they both showed statistically significant differences (p < .003) compared with both control groups (C 25.0 μm, D 19.1 μm). Conclusions: Conventional impression techniques for edentulous jaws with multiple implants are highly accurate using the new photopolymerizing splinting material SDR. Sectioning and rejoining of the SDR splinting had no impact on the impression accuracy.
Keywords: Accuracy | Edentulous jaw | Implant impression technique | Impression copings | Passive fit | Splinting material
Abstract: The increasing integration between electronics and mechanical engineering brings to the industrial market very hi-tech sensors, often non-linear and capable of more than a single input and single output. A problem of growing relevance for such sensors is calibration: classic linear calibration procedures, when applied to these extremely engineered sensors, lead to poor accuracy and are generally not satisfactory. The case study is the calibration of a bi-dimensional laser-based position sensor, in particular a position sensitive detector, i.e. an optical position transducer based on a series of photodiodes commonly used as a multidimensional sensor. To perform the calibration, a micrometric positioning table was used to test the whole photodiode active area in both directions. The sensor studied showed a very linear behaviour in the central region of the working range and a limited nonlinearity closer to the range limits, and was to be used to verify robot movement capabilities; to reduce the uncertainty associated with nonlinearities, a set of nonstandard, non-linear calibrations were performed, pointing out residual values in order to compare different algorithms. In a previous work, the authors had already tested a linear model against an algorithm based on radial basis functions (RBF) and the Nelder-Mead simplex method. The object of this paper is the definition of a procedure based on RBF and genetic algorithms for the multi-dimensional interpolation of a data cloud, and a comparison between the results of this updated procedure and those of the previously studied algorithms. The reference model for calibration was a black box with two inputs, the X and Y position of the laser spot, and two outputs, the voltages Vx and Vy, while the calibration procedure was split in two separate layers, one for each output depending on both inputs.
Given N data points in an M-dimensional environment and N values representing the non-linearity residual, the purpose of the algorithm is to approximate the data cloud with a real function, represented as the sum of a polynomial (linear) part and L radial basis functions, each associated with a different center (node) and weighted by an appropriate coefficient, which the procedure also estimates. When no starting guess for the nodes is given as input, the node coordinates are the output of a non-linear optimizer based on a genetic algorithm, whose goal is to locally minimize the objective function. The algorithm stops when it reaches a given tolerance level or a user-specified number of nodes, or when the previous iteration has a better value of the objective function. This study has been performed for various RBF classes, and shows an increased accuracy, and thus a better metrological behaviour, with respect to the standard linear (planar) calibration model traditionally used.
Keywords: Calibration | Genetic algorithm | Multi input | Radial basis function | Uncertainty
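The core of the model above — a linear polynomial plus a weighted sum of radial basis functions — reduces to one linear least-squares solve once the centers are fixed; the genetic algorithm's role in the paper is to place those centers. A minimal sketch with Gaussian RBFs and given centers (the shape parameter `eps` and the synthetic data are assumptions for illustration):

```python
import numpy as np

def fit_rbf(X, y, centers, eps=1.0):
    """Fit f(x) = p0 + p.x + sum_i w_i * phi(||x - c_i||), with Gaussian
    phi(r) = exp(-(eps*r)^2). Centers are given (the paper's genetic
    algorithm would optimise them); weights and polynomial coefficients
    come from one linear least-squares system."""
    X = np.asarray(X, dtype=float)
    C = np.asarray(centers, dtype=float)
    r = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
    A = np.hstack([np.ones((len(X), 1)), X, np.exp(-(eps * r) ** 2)])
    return np.linalg.lstsq(A, np.asarray(y, dtype=float), rcond=None)[0]

def eval_rbf(coef, X, centers, eps=1.0):
    X = np.asarray(X, dtype=float)
    C = np.asarray(centers, dtype=float)
    r = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
    A = np.hstack([np.ones((len(X), 1)), X, np.exp(-(eps * r) ** 2)])
    return A @ coef

# Synthetic calibration surface: linear part plus one Gaussian bump.
xs = np.linspace(0.0, 1.0, 5)
X = np.array([[a, b] for a in xs for b in xs])
centers = np.array([[0.5, 0.5]])
d = np.linalg.norm(X - centers[0], axis=1)
y = 0.5 + X[:, 0] - 0.3 * X[:, 1] + 2.0 * np.exp(-d**2)

coef = fit_rbf(X, y, centers)
pred = eval_rbf(coef, X, centers)
```

Because the synthetic data is exactly representable by the model, the fit reproduces it to machine precision; on real sensor data the residual is what quantifies the calibration quality.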
Abstract: In the last few years the need for methodologies capable of performing an automated geometric inspection has increased. These methodologies often use 3D high-resolution optical digitisers to acquire points from the surface of the object to be inspected. It is expected that, in the near future, geometric inspection will increasingly require the use of these instruments. At present, geometric inspection does not profit from all the opportunities offered by 3D high-resolution optical scanners, or from the numerous tools which can be used for processing the point cloud acquired from the inspected product. For some years now, these authors have been working on a new methodology for automatic tolerance inspection working from a 3D model acquired by optical digitisers. In this paper all the information recognisable in a scanned object is organised into a new data structure, called Recognised Geometric Model (RGM). The final aim is to define a representation of the inspected object for the automatic evaluation of the non-idealities pertaining to the form, orientation and location of the non-ideal features of the acquired object. The key concept of the proposed approach is the capability to recognise some intrinsic nominal properties of the acquired model. These properties are assumed as references to evaluate the non-idealities of the inspected object. With this approach the references of geometric inspection are searched for in the inspected object independently of a tolerance specification and of the availability of a 3D nominal representation. The high-level geometric information within the RGM depends on the rules used for its identification. The capability to recognise specific categories of nominal references offers the possibility of introducing new tolerances to be specified. The proposed approach has been implemented in original software by means of which a specific test case has been analysed. © 2012 Springer-Verlag France.
Keywords: Automated inspection | ISO tolerancing | Three-dimensional metrology
Abstract: In mechanical design, geometrical specifications and dimensional tolerances are commonly used to avoid final product malfunction and to allow for assembly integration. Geometric specification usage, in particular, has many manufacturing and durability implications; the feasibility of their measurement and verification, however, is often neglected, and the influence of measurement uncertainty on their evaluation underestimated. Often geometrical specifications are defined without considering measurement uncertainties, or measurability at all: it is not uncommon to find approved specifications prescribing unverifiable geometry, or dimensional tolerances that exceed state-of-the-art measurements. This article explores the case study of orthogonality between a circular hole and the plane on which it is drilled, evaluated using a Coordinate Measuring Machine. Such a specification is defined, according to ISO 14253, as the angle between the plane normal and the cylinder axis. The uncertainty of the acquired point coordinates, however small, can play a key role in the final evaluation of orthogonality: if the specified tolerance is tight enough, the misalignment uncertainty may even exceed the tolerance itself. The authors propose the results of a mathematical and numerical model, meant to help the designer define specifications by assessing the relationship between cylinder-plane misalignment measurability, CMM uncertainty and feature dimensions. © 2013 EDP Sciences.
Keywords: Geometric specification | Measurability | Orthogonality | Tolerances verification | Uncertainty
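The orthogonality evaluation described above boils down to the angle between two fitted direction vectors: the plane normal and the cylinder axis. A minimal sketch (the fitted vectors here are hypothetical stand-ins for the output of the CMM's plane and cylinder fits; propagating point-coordinate uncertainty to this angle, e.g. by Monte Carlo over the fits, is the paper's actual subject):

```python
import math
import numpy as np

def misalignment_deg(plane_normal, cylinder_axis):
    """Angle (degrees) between the fitted plane normal and the fitted
    cylinder axis; 0 means the hole is perfectly orthogonal to the face."""
    n = np.asarray(plane_normal, dtype=float)
    a = np.asarray(cylinder_axis, dtype=float)
    n /= np.linalg.norm(n)
    a /= np.linalg.norm(a)
    # abs(): the sign of a fitted axis direction is arbitrary.
    cosang = min(abs(float(n @ a)), 1.0)
    return math.degrees(math.acos(cosang))

# Hypothetical fit results: axis tilted by atan(0.01) from the normal.
angle = misalignment_deg([0.0, 0.0, 1.0], [0.01, 0.0, 1.0])
```

For tight tolerances, repeating this evaluation over perturbed point sets shows how quickly the angle's uncertainty can approach, or exceed, the specified tolerance.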
Abstract: In a previous paper (Di Angelo, L., Di Stefano, P. and Morabito, A., 2011. Automatic evaluation of form errors in high-density acquired surfaces. International Journal of Production Research, 49 (7), 2061-2082) we proposed an original methodology for the automation of the geometric inspection, starting from an acquired high-density surface. That approach performed a recognition process on the acquired data aiming at the identification of some intrinsic nominal references. An intrinsic nominal reference was detected when a geometric property was recognised to be common to a set of adjacent points in the 3D data set representing the acquired object. The recognition of these properties was carried out based on some rules. Starting from these concepts, a new specification language was defined, which is based on recognisable geometric entities. This paper expands the category of intrinsic nominal references to include new mutual intrinsic orientation, location and dimensional properties pertaining to 3D features. This approach involves the automatic construction of a geometric reference model for a scanned workpiece, called recognised geometric model (RGM). The domain of the representable entities within the RGM strictly depends on the rules used for the recognition of the intrinsic properties. In particular, this paper focuses on the rules for the recognition of the orientation and location properties between non-ideal features. When using the RGM, tolerances are specified according to the set of available and recognisable intrinsic nominal references. Based on the geometric product specification, the RGM data structure can be queried to capture some quantitative information concerning special intrinsic geometric parameters and/or non-idealities. © 2012 Copyright Taylor and Francis Group, LLC.
Keywords: automated inspection | ISO tolerancing | shape recognition | three dimensional metrology | triangular meshes
Abstract: This article proposes a mathematical approach for the definition of a multiple-input calibration diagram, based on a two-step sequence that iteratively defines a set of radial basis functions and fits their parameters to experimental data. Different radial basis functions have been evaluated and compared, focusing on Gaussian and multiquadric elements. The case study used to test this method is the calibration of a bi-dimensional laser-based position sensor; the authors believe that the method could be generalized and applied to all transducers presenting multiple outputs, multiple inputs and localized non-linearities.
Keywords: Calibration | Multi input | Radial basis function | Uncertainty
Abstract: In this paper the authors present an original methodology aiming at the automation of the geometric inspection, starting from a high-density acquired surface. The concept of intrinsic nominal reference is herein introduced in order to evaluate geometric errors. Starting from these concepts, a new specification language, which is based on recognisable geometric entities, is defined. This work also proposes some surface differential properties, such as the intrinsic nominal references, from which new categories of form errors can be introduced. Well-defined rules are then necessary for the unambiguous identification of these intrinsic nominal references. These rules are an integral part of the tolerance specification. This new approach requires that a recognition process be performed on the acquired model so as to automatically identify the already-mentioned intrinsic nominal references. The assessable errors refer to recognisable geometric entities and their evaluation leaves the nominal reference specification aside since they can be intrinsically associated with a recognised geometric shape. Tolerance specification is defined based on the error categories which can be automatically evaluated and which are an integral part of the specification language. © 2011 Taylor & Francis.
Keywords: automated inspection | form error evaluation | GPS tolerancing
Abstract: The core of the paper is focused on the experimental characterization of four different 3D laser scanners based on the Time of Flight principle, through the extraction of resolution, accuracy and uncertainty parameters from specifically designed 3D test objects. The testing process leads to four results: z-uncertainty, xy-resolution, z-resolution and z-accuracy. The first is obtained by evaluating the random residuals from the 3D capture of a planar target, the second from the scanner response to an abrupt z-jump, and the last two from the direct evaluation of images extracted from different geometric features placed progressively closer to each other. The aim of this research is to suggest a low-cost characterization process, mainly based on calibrated test objects that are easy to duplicate, that allows an objective and reliable comparison between 3D TOF scanner performances. © 2011 SPIE-IS&T.
Keywords: Accuracy | Characterization | Precision | Range sensor | Resolution | Standardization | TOF laser scanner
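As a concrete example of the first test above, the z-uncertainty of a scanner can be estimated from the residuals of a least-squares plane fitted to the capture of the planar target. The sketch below uses a standard SVD plane fit; it is a plausible reconstruction of such a test, not the paper's exact procedure.

```python
import numpy as np

def plane_fit_residual_std(points):
    """Fit a least-squares plane to an (N, 3) point cloud of a nominally flat
    target and return the standard deviation of the orthogonal residuals,
    a common estimator of the z-uncertainty of the acquisition."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value of the
    # centred cloud is the normal of the best-fit plane.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    residuals = (points - centroid) @ normal  # signed orthogonal distances
    return residuals.std(ddof=1)
```

For a perfectly planar (even tilted) cloud the residual standard deviation is zero; with a real scan it captures the random component of the depth measurement.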
Abstract: The performance of 2D digital imaging systems depends on several factors related with both optical and electronic processing. These concepts have originated standards, which have been conceived for photographic equipment and bi-dimensional scanning systems, and which have been aimed at estimating different parameters such as resolution, noise or dynamic range. Conversely, no standard test protocols currently exist for evaluating the corresponding performances of 3D imaging systems such as laser scanners or pattern projection range cameras. This paper is focused on investigating experimental processes for evaluating some critical parameters of 3D equipment, by extending the concepts defined by the ISO standards to the 3D domain. The experimental part of this work concerns the characterization of different range sensors through the extraction of their resolution, accuracy and uncertainty from sets of 3D data acquisitions of specifically designed test objects whose geometrical characteristics are known in advance. The major objective of this contribution is to suggest an easy characterization process for generating a reliable comparison between the performances of different range sensors and to check if a specific piece of equipment is compliant with the expected characteristics. © 2010 by the authors.
Keywords: 3D measurement | Accuracy | Laser scanner | Metrological characterization | Pattern projection | Resolution | Uncertainty
Abstract: Purpose - The purpose of this paper is to investigate a method for comparing the scanning and reproducing accuracy of highly shaped objects like plaster casts used in dentistry. Design/methodology/approach - Theoretical considerations on the errors introduced by the scanning systems and the subsequent elaboration of the point cloud data have led to a method for estimating the accuracy of the whole process. Suitable indices have been chosen and computed at each stage. As a final result, the overall chain of scanning and reproducing systems can be assessed. In order to validate the proposed method, casts have been scanned by means of commercial systems and then reproduced by using different rapid prototyping technologies, materials and parameters. Error indices have been computed and reported. Findings - Since it is not possible to define reliable and meaningful reference models for non-standard shapes, an absolute accuracy value for the scanning process cannot be stated. Nevertheless, the proposed method, thanks to its relative performance indices, allows the comparison of different acquisition systems and the evaluation of the best-performing manufacturing chain. Practical implications - The study provides a method to assess the relative performance of commercial systems in both the scanning and the reproducing stage. Originality/value - In the literature, some studies on the accuracy of scanning devices have been found, but they are based on standard geometrical features. In this paper, the problem of complex shapes in the absence of a reference model is addressed instead. Copyright © Emerald Group Publishing Limited [ISSN 1355-2546].
Keywords: Accuracy | Dentistry | Rapid prototypes | Structural analysis
Abstract: While parallel-kinematics micropositioners offer high repeatability in positioning with respect to their own reference frame, the accuracy of absolute positioning with respect to a second, fixed reference frame largely depends on knowledge of the relative position between the two. A method is presented here for the rapid estimation of these relative positions, based on subsequent movements of a positioner under a CMM and on the evaluation of the position uncertainty.
Keywords: Accuracy measurement | Hexapod | Positioning | Uncertainty
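A standard way to estimate the relative pose between two reference frames from a set of corresponding point measurements (e.g. commanded positioner poses versus CMM readings) is the SVD-based rigid registration, or Kabsch method, sketched below. This is a generic illustration of the technique, not necessarily the estimation method developed in the paper.

```python
import numpy as np

def estimate_rigid_transform(P, Q):
    """Estimate the rotation R and translation t minimising ||R p_i + t - q_i||
    over corresponding points (Kabsch / SVD method).

    P, Q: (N, 3) arrays of the same physical points expressed in the two frames.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance of centred clouds
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against an improper rotation
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

With three or more non-collinear correspondences the pose is determined; extra points average out measurement noise, and the residuals after alignment give a handle on the position uncertainty discussed in the abstract.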
Abstract: Resolution analysis is a well-established 2D imaging topic that relies on particular targets for equipment characterization. These concepts can be extended to 3D imaging through the use of specific three-dimensional target objects. The core of this paper is the experimental characterization of seven different 3D laser scanners through the extraction of resolution, accuracy and uncertainty parameters from 3D target objects. Processing range maps acquired at the same nominal resolution leads to different results in terms of z-resolution, optical resolution, and linear and angular accuracy. The aim of this research is to suggest a characterization process, mainly based on resolution and accuracy parameters, that allows a reliable comparison between 3D scanner performances. © 2009 SPIE-IS&T.
Keywords: 3D Scanner | Accuracy | Characterization | Resolution | SFR | Target Object
Abstract: Incremental forming is a flexible and innovative sheet metal forming process able to form complex shapes without the need for any expensive die. In this way, expensive fixtures are avoided, making production cheaper and more advantageous for small batches. However, even more than the slowness of the process, geometrical accuracy represents its most important drawback today. In particular, two kinds of geometrical error can be observed on an incrementally formed sheet component: elastic springback, which modifies the imposed final depth so that it "moves away" from the designed one, and the undesired bending of the sheet, which undergoes the punch action. Several studies aimed at optimising the equipment and/or the tool path, in order to reduce the profile deviation, have been carried out. In this paper, an experimental investigation was carried out in order to test and introduce a new approach able to solve the above problem. More in detail, the tests were executed by applying an additional backdrawing phase after the conventional negative deformation. Different testing conditions were evaluated during the experimental campaign and critically compared in the analysis. © Springer/ESAFORM 2009.
Keywords: Accuracy | Incremental sheet forming | Sheet metal forming
Abstract: In this paper a statistical method for tolerance analysis and cost evaluation is presented. Statistical tolerance analysis is performed by assuming the Chase and Greenwood mean shift model, providing an original systematic approach to evaluating the mean shift factor. The proposed approach allows for the analysis of manufacturing costs at different confidence levels of the variables to be optimised. The interactions between customer requirements, design parameters and process variables are taken into account by incorporating the process planning models and the conceptual and embodiment design solutions. The cost analysis is performed by selecting the most appropriate driver for each component of the cost due to tolerances. Cost driver parameters are evaluated over the product life cycle, in different design domains (customer, design and process), and also include customer satisfaction (quality loss).
Keywords: Cost analysis | Mean shift model | Tolerance analysis
Abstract: Tolerance design plays an important role in the modern design process by introducing quality improvements and limiting manufacturing costs. In this paper a method for statistical tolerance analysis and synthesis is presented. The method is implemented using the mean shift model of Chase and Greenwood, providing a systematic approach to evaluating the mean shift factor. The method considers all the principal factors that affect the statistical sum of a certain number of assembly dimensions; in particular, these include the mean shift ratio, the confidence level, the number of dimensions in the assembly and the tolerance assortment among the component dimensions. An implementation of the mean shift model for tolerance synthesis is described. The tolerance synthesis is performed in an unusual way, taking into account in the optimization process the typical parameters that affect product variability. For this purpose the method uses four types of condition for the dimensional tolerances: fixed tolerance value, fixed mean shift ratio, fixed mean shift and fixed natural variability. Furthermore, in the optimization process, the service variability is considered under two conditions: fixed and variable service variability. A case study is presented and the results of some simulations are discussed.
Keywords: Mean shift model | Tolerance analysis and synthesis
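The mean shift combination underlying the two papers above is commonly written as T_asm = Σ m_i t_i + (Z/3)·sqrt(Σ ((1 − m_i) t_i)²), where m_i ∈ [0, 1] is the mean shift factor of component i (0 = purely statistical behaviour, 1 = worst case) and Z the sigma multiplier for the chosen confidence level. The snippet below is a minimal sketch of this commonly cited form of the Chase and Greenwood model, not the authors' implementation.

```python
import math

def mean_shift_assembly_tolerance(tols, shifts, z=3.0):
    """Assembly tolerance under the estimated mean shift model: the shifted
    portion of each component tolerance (m_i * t_i) adds linearly, while the
    remaining portion ((1 - m_i) * t_i) adds statistically (root sum square).

    tols:   component tolerances t_i
    shifts: mean shift factors m_i in [0, 1]
    z:      sigma multiplier for the confidence level (3.0 -> 99.73 %)
    """
    linear = sum(m * t for m, t in zip(shifts, tols))
    rss = math.sqrt(sum(((1 - m) * t) ** 2 for m, t in zip(shifts, tols)))
    return linear + (z / 3.0) * rss
```

Setting all m_i = 1 recovers worst-case stacking, while all m_i = 0 with z = 3 recovers the classical root-sum-square result, which is why the model interpolates between the two extremes.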
Abstract: In this paper, the problem of the soft dependence of parameters in design systems is analysed using soft models. A new form of computing, called Soft Computing, has recently been used in many emerging disciplines because it is tolerant of imprecision, uncertainty and partial truth. Soft Computing draws on many disciplines, such as Bayesian inference and the maximum entropy method. The logical relationships that tie the different elements together can be defined more easily using the axioms of soft design derived from the MinEnt principle. The fundamental axiom of design is: a valid design has a minimum value of information and depends on a finite and limited number of independent, or softly dependent, parameters. © 2002 Published by Elsevier Science B.V.
Keywords: Axiomatic design | Entropy | MaxEnt | Soft Computing | Soft design | Uncertainty
Special Issue "Recent Advances in Smart Design and Manufacturing Technology"
Special Issue "Applications of 3D High-Resolution Optical Digitizers in Industrial Products"
Special Issue "3D Sensing and Imaging for Biomedical Investigations"
Special Issue "Automated Product Inspection for Smart Manufacturing"
Special Issue "Modeling, Testing and Applications of Metallic Foams and Cellular Materials"