
Research

Past Research

Hot rolled sheet real-time image analysis algorithm development using deep learning
  • Author: Artificial Intelligence Laboratory
  • Views: 1
  • Overview
    - Classification of the passing steel sheet's status from images obtained by camera in the rolling stage of the steel production process
    - Image classification algorithm based on a Convolutional Neural Network (CNN)
    - Status classification for each of the D/S and W/S areas of the sheet
  • Key Research
    - Development and accuracy improvement of a real-time passing-sheet status classification algorithm
    - Methods to overcome human labeling error in supervised learning
    - Supervised training of the classification model on an imbalanced dataset
  • Project Explanation
    This project develops artificial intelligence to stand in for the people who observe the status of the passing steel sheet in the rolling stage of steel production. Because the rolling process involves high temperature and high humidity, sensors cannot be mounted at close range, so the proposed method collects images with cameras at a distance and observes the status of the passing sheet in real time using computer vision.
    Training the deep neural network by supervised learning requires labeling the images collected from the cameras. Since this is not a simple classification task but a problem of grading the sheet's condition by degree, the human labels contain many errors. This labeling noise must be overcome because it confuses training.
    We exceeded the target accuracy of 80% and developed an algorithm capable of real-time classification.
  • Project Detail
    - Funding: POSCO
    - Term: 2019-08-15 ~ 2020-03-31 (7 months)
    - Budget: 70,000,000 KRW
  • Contact
    - Name: JunHo Yun, Master's course
    - E-mail: yjh960314@gist.ac.kr
  • Registered: 2020-07-07 13:55:54
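The imbalanced-dataset problem mentioned in this entry is commonly handled by weighting the loss per class. The report does not specify which method the project actually used; the following is a minimal sketch of one standard remedy, inverse-frequency class weights, in plain Python (the label names and function are illustrative):

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Compute per-class loss weights inversely proportional to class frequency.

    Rare classes receive larger weights so the classifier is not dominated
    by the majority class during supervised training.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    n_classes = len(counts)
    # weight_c = total / (n_classes * count_c); the majority class gets < 1
    return {c: total / (n_classes * n) for c, n in counts.items()}

# Illustrative imbalanced label set: 90 "normal" frames, 10 "defect" frames
labels = ["normal"] * 90 + ["defect"] * 10
weights = inverse_frequency_weights(labels)
```

These weights would then be passed to a weighted cross-entropy loss when training the CNN classifier.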
Developed intelligent UI/UX technology for AR glasses-based docent operation
  • Author: Artificial Intelligence Laboratory
  • Views: 6
  • Overview
    - Exhibition-space UX optimized for wearing augmented-reality (AR) glasses
    - Space-based storytelling technology and emotion recognition (AI) technology
    - Virtual-character docent service technology
  • Key Research
    - Gesture recognition and viewer-centered gesture NUX
    - Chatbot development and information-provision functions based on audience conversation recognition
    - Real-time wireless indoor visitor monitoring system based on UWB sensors
    - Production tool for AR docents in museums
  • Project Explanation
    In this project, exhibition-space UX technology optimized for wearable augmented reality, space-based storytelling technology, and viewer emotion recognition AI technology are combined to organize optimal information for individual visitors and to provide a guidance service led by virtual-character docents through wearable AR glasses.
    We demonstrated the optimized docent guidance service through wearable AR glasses for visitors to the Shilla Hall of the National Museum of Korea. The Artificial Intelligence Laboratory of the Gwangju Institute of Science and Technology developed a real-time visiting-behavior monitoring system that measures the indoor location of the audience: ultra-wideband (UWB) radar sensors form a wireless indoor positioning system that monitors each viewer's position, movement, and retention time per relic in real time.
    The goal is an AR docent service optimized for individual visitors, supported by software that analyzes viewing characteristics by age, gender, and so on. Building on the real-time UWB measurements, we plan to study customized service-delivery technology in which AI analyzes the characteristics of measured targets and actively provides appropriate services.
  • Publications
    - Joosoon Lee, Hogeon Seo, Kyoobin Lee. "Viewing Behavior Analysis System Using Ultra-Wideband Radar" (in Korean). Smart Media Journal, 2019, vol. 8, no. 4, pp. 85-90.
    - Joosoon Lee, Hogeon Seo, Kyoobin Lee. "Analysis of Museum Patrons' Behavior using Ultra-Wide Band Radar based Tracking System". The 8th International Conference on Smart Media & Application, 2019.
  • Project Detail
    - Program: 2017 Support Project for Research and Development of Cultural Technology
    - Project: Developed intelligent UI/UX technology for AR glasses-based docent operation
    - Funding: Ministry of Culture, Sports and Tourism / Korea Creative Content Agency
    - Term: 2017-04-01 ~ 2019-12-31 (33 months)
    - Budget: 2,000,000,000 KRW
    - Consortium: Gwangju Institute of Science and Technology, Korean Culture Technology Institute, VIRNECT, Perpect-Storm, Sampartners, TILON
  • Contact
    - Name: Joosoon Lee, Integrated Ph.D. program
    - E-mail: joosoon1111@gist.ac.kr
  • Registered: 2020-06-25 10:31:36
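At its core, the UWB indoor positioning described above estimates a visitor's position from range measurements to fixed anchors. The entry does not detail the lab's actual positioning algorithm; below is a minimal 2-D trilateration sketch assuming three anchors with exact ranges (the anchor coordinates and function name are illustrative):

```python
import math

def trilaterate(anchors, dists):
    """Estimate a 2-D position from three anchor positions and measured ranges.

    Subtracting the first circle equation from the other two cancels the
    quadratic terms, leaving a 2x2 linear system solved with Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # nonzero when anchors are not collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Visitor at (2, 3) in a 10 m x 10 m room with anchors in three corners
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (2.0, 3.0)
dists = [math.dist(true_pos, a) for a in anchors]
est = trilaterate(anchors, dists)
```

Real UWB ranges are noisy, so a deployed system would typically use more than three anchors and a least-squares or filtering estimate rather than this exact solution.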
Sim-to-Real Deep Reinforcement Learning for Visuomotor of Robots
  • Author: Artificial Intelligence Laboratory
  • Views: 2
  • Overview
    - Development of a deep RL algorithm that learns policies for basic robotic tasks in simulation
    - Transfer of the trained models to the real world
  • Key Research
    - Deep reinforcement learning (demo: https://www.youtube.com/watch?v=o0DX9Kk0oCo)
    - Sim-to-real transfer (domain randomization)
    - Object detection and pose estimation
  • Project Explanation
    To train policies for robotic tasks such as pick-and-place and grasping, we chose a sim-to-real approach. We constructed a simulation environment to generate synthetic data: synthetic images collected in simulation are used to train models that detect the target object among various objects and estimate its x, y position relative to the center of the table. To transfer the trained models to the real world, we generate synthetic images with domain randomization, which randomizes the texture and color of objects. The trained model is then tested in the real-world environment: it estimates the target object's (a cube's) x, y position from a real image, and the robot picks up the object with an IK solver according to the estimated pose.
    We also train deep RL algorithms to learn a policy for picking a target object in simulation, using well-known RL algorithms (TRPO, PPO, QT-Opt, etc.).
  • Project Detail
    - Project: GIST AILab AI Experimental Research
    - Funding: GIST
    - Term: 2019-01-01 ~ 2019-12-31 (12 months)
    - Budget: 35,000,000 KRW
  • Contact
    - Name: RaeYoung Kang, Integrated Ph.D. program
    - E-mail: raeyo@gm.gist.ac.kr
  • Registered: 2020-06-25 07:56:46
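Domain randomization, named above as the sim-to-real method, works by varying nuisance parameters of the simulated scene for every training image so that a model trained only in simulation generalizes to real camera images. The concrete parameters this project randomized beyond texture and color are not listed; the sketch below illustrates the idea with hypothetical scene parameters:

```python
import random

def randomize_scene(rng):
    """Sample one randomized scene configuration for synthetic-image rendering.

    Each training image gets fresh random appearance parameters, so the
    detector cannot overfit to any one simulated look of the scene.
    """
    return {
        "cube_color": [rng.random() for _ in range(3)],         # RGB in [0, 1)
        "table_texture": rng.choice(["wood", "metal", "cloth"]),
        "light_intensity": rng.uniform(0.5, 1.5),
        "camera_jitter_deg": rng.uniform(-5.0, 5.0),            # small pose noise
    }

rng = random.Random(0)  # fixed seed for a reproducible scene set
scenes = [randomize_scene(rng) for _ in range(100)]
```

A renderer would consume each configuration to produce one labeled synthetic image for the detection and pose-estimation models.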
Artificial intelligence-based algorithm development for sleep analysis and diagnosis of sleep apnea syndromes
  • Author: Artificial Intelligence Laboratory
  • Views: 2
  • Overview
    - This project aims to develop an in-sleep treatment system that diagnoses obstructive sleep apnea using deep learning and biosignal information acquired from contactless or wearable sensors
  • Key Research
    - Deep learning algorithm for automatic sleep scoring based on polysomnography data
      - Polysomnography data acquisition from home and abroad
      - A simple and accurate deep learning model for automatic sleep scoring (Intra- and Inter-epoch Temporal Context Network)
    - Deep learning algorithm that estimates respiration patterns using UWB radar
      - Optimal sensor selection for contactless respiration estimation (UWB sensor)
      - Deep learning model for respiration pattern estimation
    - Intelligent respiration estimation and apnea classification network (REACTNet)
  • Project Explanation
    In this project, we acquired biosignals from sleep apnea patients and developed deep learning models for sleep-stage scoring and sleep apnea detection by analyzing signals such as EEG, EMG, EOG, and respiration. The project also aims to establish a contactless, real-time sleep scoring and sleep apnea detection system based on UWB sensor data.
  • Project Detail
    - Funding: GIST
    - Term: 2018-03-01 ~ 2019-12-31
    - Budget: 190,000,000 KRW
  • Contact
    - Name: SungJu Lee, Ph.D. course
    - E-mail: lsj2121@gm.gist.ac.kr
  • Registered: 2020-06-24 22:39:36
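Automatic sleep scoring as described above assigns one stage per fixed 30-second epoch of the recorded biosignals, which is the standard clinical (AASM) convention reflected in the project's epoch-based network names. As a small illustration of that preprocessing step only, not the project's actual pipeline, here is a sketch of epoch segmentation:

```python
def segment_epochs(signal, sample_rate_hz, epoch_seconds=30):
    """Split a sampled biosignal into fixed-length epochs for sleep scoring.

    Scoring models consume the signal in 30-second windows; a trailing
    partial window is discarded.
    """
    samples_per_epoch = int(sample_rate_hz * epoch_seconds)
    n_epochs = len(signal) // samples_per_epoch
    return [
        signal[i * samples_per_epoch:(i + 1) * samples_per_epoch]
        for i in range(n_epochs)
    ]

# 5 minutes of a 100 Hz signal -> ten 30-second epochs of 3000 samples each
signal = [0.0] * (100 * 300)
epochs = segment_epochs(signal, sample_rate_hz=100)
```

Models such as the Intra- and Inter-epoch Temporal Context Network then classify each epoch while also attending to its neighboring epochs.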