IEEE Computer Graphics and Applications
Role of Intricate Pottery Visualization in Ceramic Manufacturing
Sarah Dashti, Fiaz Hussain, Fiona Carroll, Edmond Prakash, Andres Navarro-Newball
Keywords: Visualization; Solid modeling; Three-dimensional displays; Ceramics; Manufactured products; Layered manufacturing; Three-dimensional printing; art; ceramics; computational geometry; data visualisation; image representation; image texture; layered manufacturing; pottery; rapid prototyping (industrial); solid modelling; accurate rapid prototyping; modeling prototyping; rapid visualization; intricate ceramic pottery; artist; rapid strides; ceramic manufacturing; intricate pottery visualization; virtual pottery; artists; accurate printable models; interactive visualization; low polygon shape representation
Abstract: Layered manufacturing, the underlying technology of 3-D printing, has made rapid strides over the last 30 years. We discuss layered manufacturing from the artist’s perspective, especially for intricate ceramic pottery. We contend that opportunities exist for applying visualization to the foremost problems plaguing layered manufacturing. Virtual pottery involves meeting two conflicting constraints: rapid visualization during modeling and accurate rapid prototyping during manufacturing. Artists simultaneously need both a low-polygon shape representation for interactive visualization and an adequate representation for generating accurate printable models. Artists also face the additional complexities of adding surface details that cannot be achieved by hand and of handling materials like clay used in the manufacturing of real pottery. Using a system we have developed that creates volumetric textures for virtual pottery from sound resonance patterns, we show how visualization helps address both of these problem areas.
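The abstract does not detail how sound resonance patterns become volumetric textures. As a purely illustrative sketch (not the authors' implementation), a Chladni-style standing-wave field could be sampled on a voxel grid and used to displace mesh vertices along their normals; the function names, mode parameters, and coordinate assumptions below are all hypothetical:

```python
import numpy as np

def resonance_texture(shape=(64, 64, 64), modes=((3, 5, 2),), amplitude=0.05):
    """Sample a Chladni-style standing-wave pattern on a 3-D grid.

    Each mode (m, n, p) contributes a product of sinusoids; the sum is
    normalized to [-1, 1] and scaled so it can drive surface displacement.
    """
    x, y, z = np.meshgrid(
        *(np.linspace(0.0, np.pi, s) for s in shape), indexing="ij"
    )
    field = np.zeros(shape)
    for m, n, p in modes:
        field += np.sin(m * x) * np.sin(n * y) * np.sin(p * z)
    field /= max(np.abs(field).max(), 1e-9)  # normalize to [-1, 1]
    return amplitude * field                 # displacement per voxel

def displace(vertices, normals, texture):
    """Displace mesh vertices (assumed to lie in [0, 1]^3) along their
    normals by the texture value sampled at each vertex position."""
    dims = np.array(texture.shape)
    idx = np.clip((vertices * (dims - 1)).astype(int), 0, dims - 1)
    d = texture[idx[:, 0], idx[:, 1], idx[:, 2]]
    return vertices + normals * d[:, None]
```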
UTM City—Visualization of Unmanned Aerial Vehicles
Jimmy Johansson Westberg, Karljohan Lundin Palmerius, Jonas Lundberg
Keywords: Visualization; Traffic control; Urban areas; Process control; Aerospace electronics; Autonomous aerial vehicles
Abstract: In this article, we present UTM City, a digital platform for unmanned traffic management, for research on the visualization, simulation, and management of autonomous urban vehicle traffic. Such vehicles orient themselves automatically and provide services ranging from transport to remote presence and surveillance; new regulations and standards for authorization and monitoring are currently being developed to accommodate such services. Our system has been developed in close collaboration with domain experts who have contributed scenarios and participated in numerous workshops exploring the use of visualization in airborne drone traffic monitoring, management, and airspace development. We share our experiences with this system and explore the need for visualization in future scenarios to ensure safe, free, and efficient airspaces.
Educational Data Virtual Lab: Connecting the Dots Between Data Visualization and Analysis
Sonsoles López-Pernas, Andres Munoz-Arcentales, Carlos Aparicio, Enrique Barra, Aldo Gordillo, Joaquín Salvachúa, Juan Quemada
Keywords: Electronic learning; Visualization; Codes; Data models; Data visualization; Virtual laboratories; cloud computing; computer aided instruction; data handling; data mining; data visualisation; interactive systems; Educational Data Virtual Lab; EDVL; open-source platform; data exploration; interactive visualization engine; complete data lifecycle; data science methods; data science skills; Data Visualization; Educational Status; Pilot Projects
Abstract: Educational Data Virtual Lab (EDVL) is an open-source platform for data exploration and analysis that combines the power of a coding environment, the convenience of an interactive visualization engine, and the infrastructure needed to handle the complete data lifecycle. Built on the building blocks of the FIWARE European platform and Apache Zeppelin, the tool allows domain experts to become acquainted with data science methods using the data available within their own organization, ensuring that the skills they acquire are relevant to their field and driven by their own professional goals. We used EDVL in a pilot study in which we carried out a focus group within a multinational company to gain insight into potential users’ perceptions of EDVL, from both the educational and operational points of view. The results of our evaluation suggest that EDVL holds great potential to train the workforce in data science skills and to enable collaboration among professionals with different levels of expertise.
View-Dependent Deformation for 2.5-D Cartoon Models
Tsukasa Fukusato, Akinobu Maejima
Keywords: Three-dimensional displays; Solid modeling; Computational modeling; Deformable models; Graphics; Animation; computer animation; rendering (computer graphics); cartoon objects; exaggerations; view-dependent deformation techniques; 3-D character animation; user-specified 2-D key views; classic cartoon characters; Mickey Mouse ears; two-and-a-half-dimensional cartoon models; VDD
Abstract: Two-and-a-half-dimensional (2.5-D) cartoon models are popular methods for simulating three-dimensional (3-D) movements, such as out-of-plane rotation, from two-dimensional (2-D) shapes in different views without 3-D models. However, cartoon objects and characters have several exaggerations that do not correspond to any real 3-D positions (e.g., Mickey Mouse’s ears), which makes existing methods unsuitable for designing such exaggerations. Hence, we incorporate view-dependent deformation (VDD) techniques, which were proposed in the field of 3-D character animation, into 2.5-D cartoon models. The exaggerations at an arbitrary viewpoint are automatically obtained by blending the user-specified 2-D shapes of key views. Several examples demonstrate the robustness of our method over previous methods. In addition, we conducted a user study and confirmed that the proposed method is effective for animating classic cartoon characters.
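The abstract describes blending user-specified 2-D key-view shapes at an arbitrary viewpoint. A minimal sketch of that idea, assuming key views are authored as 2-D vertex layouts tagged with camera directions; the Gaussian-style weighting and the `sharpness` parameter are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def blend_key_views(view_dir, key_dirs, key_shapes, sharpness=8.0):
    """Blend authored 2-D key-view shapes for an arbitrary viewpoint.

    view_dir   : (3,) current camera direction (unit vector)
    key_dirs   : (K, 3) camera directions of the key views (unit vectors)
    key_shapes : (K, V, 2) 2-D vertex positions authored for each key view
    Returns the (V, 2) blended 2-D shape. Each key view's weight falls off
    with angular distance; `sharpness` controls how local its influence is.
    """
    cos_sim = key_dirs @ view_dir            # (K,) cosine similarity
    w = np.exp(sharpness * (cos_sim - 1.0))  # peaks at the matching view
    w /= w.sum()                             # normalize to a partition of unity
    return np.einsum("k,kvc->vc", w, key_shapes)
```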
Digitizing Wildlife: The Case of a Reptile 3-D Virtual Museum
Savvas Zotos, Marilena Lemonari, Michael Konstantinou, Anastasios Yiannakidis, Georgios Pappas, Panayiotis Kyriakou, Ioannis N. Vogiatzakis, Andreas Aristidou
Keywords: Three-dimensional displays; Animals; Behavioral sciences; Virtual museums; Wildlife; Metadata; Motion capture; Motion detection; augmented reality; cameras; computer aided instruction; computer animation; image colour analysis; meta data; museums; virtual reality; digitizing wildlife; reptile 3-D virtual museum; holistic metadata documentation; reptile behaviors; digital counterpart; optical motion capture system; RGB-vision cameras; augmented reality functionalities; online repository; natural environment; animals; motion data reusability; reptiles; Animals, Wild; Museums; Reptiles; User-Computer Interface; Virtual Reality
Abstract: In this article, we design and develop a 3-D virtual museum with holistic metadata documentation and a variety of reptile behaviors and movements. First, we reconstruct the reptile’s mesh in high resolution and then create its rigged and skinned digital counterpart. We acquire the movement of two subjects using an optical motion capture system, accelerometers, and RGB-vision cameras; these movements are then segmented and annotated into various behaviors. The 3-D environment, virtual reality (VR), and augmented reality (AR) functionalities of our online repository serve as tools for interactively educating the public about animals that are difficult to observe and study in their natural environment. The museum also reveals important information regarding animals’ intangible characteristics (e.g., behavior), which is critical for the preservation of wildlife. Our museum is publicly accessible, enabling motion data reusability and facilitating learning applications through gamification. We conducted a user study that confirms the naturalness and realism of our reptiles, along with the ease of use and usefulness of our museum.
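The segmentation step is only named in the abstract. As a hedged illustration of the simplest possible approach, a speed threshold on the root trajectory can split a capture into moving and resting clips; real behavior annotation would use richer features and manual labels, and the threshold values here are arbitrary:

```python
import numpy as np

def segment_by_speed(positions, fps, speed_thresh=0.05):
    """Split a motion-capture trajectory into moving/resting segments.

    positions : (T, 3) root-joint positions over time
    fps       : capture frame rate
    Returns a list of (start_frame, end_frame, label) tuples. Speed alone
    is a crude proxy for behavior; it only illustrates the pipeline step.
    """
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) * fps
    moving = speed > speed_thresh
    segments, start = [], 0
    for t in range(1, len(moving)):
        if moving[t] != moving[t - 1]:  # state change => segment boundary
            segments.append((start, t, "moving" if moving[start] else "resting"))
            start = t
    segments.append((start, len(moving), "moving" if moving[start] else "resting"))
    return segments
```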
GPU-Accelerated Collision Analysis of Vehicles in a Point Cloud Environment
Harshil Shah, Sambit Ghadai, Dhruv Gamdha, Alex Schuster, Ivan Thomas, Nathan Greiner, Adarsh Krishnamurthy
Keywords: Point cloud compression; Solid modeling; Computational modeling; Collision avoidance; Data models; Navigation; Graphics processing units; Vehicle routing; Vehicle crash testing; CAD; computational geometry; computer graphic equipment; computer graphics; graphics processing units; mesh generation; solid modelling; adaptive collision; efficient collision; GPU-accelerated voxel Minkowski sum; clearance analysis; GPU implementation; GPU-accelerated collision analysis; point cloud environment; GPU-accelerated collision detection method; navigation; Voxelization; Point Cloud; Collision Detection; Clearance Analysis; Minkowski Sums
Abstract: We present a GPU-accelerated collision detection method for the navigation of vehicles in enclosed spaces represented using large point clouds. Our approach takes a CAD model of a vehicle, converts it to a volumetric representation (voxels), and computes the collision of the voxels with a point cloud representing the environment to identify a suitable path for navigation. We perform adaptive and efficient collision tests of the voxels against the point cloud without the need for mesh generation. We have also developed a GPU-accelerated voxel Minkowski sum algorithm to perform a clearance analysis of the vehicle. Finally, we provide theoretical bounds for the accuracy of the collision and clearance analyses. Our GPU implementation is linked with Unreal Engine to provide flexibility in performing the analysis.
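A CPU-side numpy sketch of the core voxel-versus-point-cloud test the abstract describes (the paper's GPU algorithm and accuracy bounds are not reproduced here): transform environment points into the vehicle frame and check whether any land in occupied voxels. All names and parameters are illustrative assumptions:

```python
import numpy as np

def collides(occupied, voxel_size, origin, pose_R, pose_t, points):
    """Check a candidate vehicle pose against an environment point cloud.

    occupied   : (Nx, Ny, Nz) boolean voxel grid of the vehicle (its frame)
    origin     : (3,) position of voxel (0, 0, 0) in the vehicle frame
    pose_R, pose_t : rotation (3, 3) and translation (3,) of the vehicle
    points     : (P, 3) environment point cloud in world coordinates
    A point collides if it maps into an occupied vehicle voxel.
    """
    local = (points - pose_t) @ pose_R  # world -> vehicle frame (R^T applied)
    idx = np.floor((local - origin) / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(occupied.shape)), axis=1)
    hits = occupied[idx[inside, 0], idx[inside, 1], idx[inside, 2]]
    return bool(hits.any())
```

For the clearance analysis, dilating `occupied` by a ball of the required clearance radius amounts to a voxel Minkowski sum of the vehicle with that ball, which is one way to read the abstract's clearance step.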
Keypoint-Based Disentangled Pose Network for Category-Level 6-D Object Pose Tracking
Shantong Sun, Rongke Liu, Shuqiao Sun, Unsang Park
Keywords: Three-dimensional displays; Feature extraction; Pose estimation; Solid modeling; Neural networks; Transforms; Training data; computer vision; feature extraction; least squares approximations; neural nets; object detection; object tracking; pose estimation; singular value decomposition; pose network; 3-D computer vision; neural network; keypoint-based disentangled pose network; category-level 6-D object pose tracking; keypoint-based object pose estimation; least-squares optimization; 3-D rotation; 3-D translation; NOCS-REAL275 dataset; Algorithms; Neural Networks, Computer; Pattern Recognition, Automated
Abstract: Category-level 6-D object pose tracking is very challenging in the field of 3-D computer vision. Keypoint-based object pose estimation has demonstrated its effectiveness in dealing with this problem. However, current approaches first estimate the keypoints through a neural network and then compute the interframe pose change via least-squares optimization, estimating rotation and translation in the same way and ignoring the differences between them. In this work, we propose a keypoint-based disentangled pose network, which disentangles the 6-D object pose change into a 3-D rotation and a 3-D translation. Specifically, the translation is directly estimated by the network, and the rotation is indirectly calculated by singular value decomposition from the keypoints. Extensive experiments on the NOCS-REAL275 dataset demonstrate the superiority of our method.
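Recovering a rotation from keypoint correspondences via singular value decomposition, as the abstract describes, matches the classical Kabsch (orthogonal Procrustes) solution. A minimal numpy version, assuming the correspondences are already centered so translation is handled by the network's separate branch:

```python
import numpy as np

def rotation_from_keypoints(src, dst):
    """Least-squares rotation aligning two keypoint sets (Kabsch via SVD).

    src, dst : (K, 3) corresponding keypoints from consecutive frames,
    each set centered at its centroid. Returns R with R @ src[i] ~ dst[i].
    """
    H = src.T @ dst                          # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0, 1.0, d])
    return Vt.T @ D @ U.T
```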
Predicting Surface Reflectance Properties of Outdoor Scenes Under Unknown Natural Illumination
Farhan Rahman Wasee, Alen Joy, Charalambos Poullis
Keywords: Lighting; Surface treatment; Geometry; Computational modeling; Rendering (computer graphics); Neural networks; Light sources; albedo; image colour analysis; image processing; lighting; neural nets; realistic images; reflectivity; rendering (computer graphics); reflectance maps; low-parameter reflection model; phenomenological physics-based scattering models; predicted reflectance properties results; surface reflectance properties; outdoor scenes; unknown natural illumination; outdoor illumination conditions; complex process; complete framework; bidirectional reflectance distribution function; incoming light; outgoing view directions; surface points; Photic Stimulation; Surface Properties
Abstract: Estimating and modeling the appearance of an object under outdoor illumination conditions is a complex process. This article addresses this problem and proposes a complete framework to predict the surface reflectance properties of outdoor scenes under unknown natural illumination. Uniquely, we recast the problem in terms of the two directional arguments of the bidirectional reflectance distribution function, the incoming light and outgoing view directions: first, surface points’ radiance captured in the images and their outgoing view directions are aggregated and encoded into reflectance maps; second, a neural network trained on these reflectance maps infers a low-parameter reflection model. Our model is based on phenomenological and physics-based scattering models. Experiments show that rendering with the predicted reflectance properties results in an appearance visually similar to using textures that cannot otherwise be disentangled from the reflectance properties.
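The abstract does not specify the reflectance-map encoding. One plausible sketch, offered only as an assumption, aggregates per-point radiance into a 2-D histogram over the outgoing view direction expressed in a local frame around the surface normal; all binning choices and the tangent-frame construction below are illustrative:

```python
import numpy as np

def build_reflectance_map(radiance, view_dirs, normals, bins=32):
    """Aggregate observed radiance into a view-direction-indexed map.

    radiance  : (N,) radiance samples for points of one material cluster
    view_dirs : (N, 3) unit outgoing view directions (world frame)
    normals   : (N, 3) unit surface normals
    Bins the elevation/azimuth of the view direction relative to the
    normal; each cell stores the mean radiance seen from that direction.
    """
    cos_theta = np.clip(np.sum(view_dirs * normals, axis=1), -1.0, 1.0)
    theta = np.arccos(cos_theta)  # elevation angle from the normal
    # Azimuth needs a tangent frame; pick an arbitrary but consistent one
    # (degenerate when the normal is parallel to z, hence the epsilon).
    t = np.cross(normals, np.array([0.0, 0.0, 1.0]))
    t /= np.linalg.norm(t, axis=1, keepdims=True) + 1e-9
    b = np.cross(normals, t)
    phi = np.arctan2(np.sum(view_dirs * b, axis=1),
                     np.sum(view_dirs * t, axis=1))
    rng = [[0.0, np.pi / 2], [-np.pi, np.pi]]
    total, _, _ = np.histogram2d(theta, phi, bins=bins, range=rng,
                                 weights=radiance)
    counts, _, _ = np.histogram2d(theta, phi, bins=bins, range=rng)
    return total / np.maximum(counts, 1)  # mean radiance per cell
```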