
Artificial Intelligence Laboratory

Project

Current

Development of Deep Reinforcement Learning-Based Shared Autonomy Technology for Intelligent Response to Unstructured Work Environments such as Assembly Tasks
  • Author: Artificial Intelligence Laboratory
  • Views: 314
  • Shared autonomy based on deep reinforcement learning for responding intelligently to unfixed environments such as robotic assembly tasks

    Overview
    - Development of a grasping algorithm usable in unstructured environments
    - Development of a worker intention recognition algorithm for shared-autonomy environments

    Key Research
    - Grasp point detection
    - Robot grasping
    - Human intention recognition

    Project Explanation
    This project develops an assembly robot that is robust to unstructured work environments. In collaboration with Korea University, we develop algorithms that allow a robot to find assembly parts in a changing, unstructured environment and assemble them where needed.

    The GIST Artificial Intelligence Laboratory develops a deep learning-based object grasping algorithm. To make it work reliably in unstructured environments, we study techniques such as Sim-to-Real transfer. We also study a deep learning-based intention recognition algorithm so that the robot can make use of human instructions in such environments.

    Project Detail
    - Funding: Ministry of Trade, Industry and Energy; Korea Evaluation Institute of Industrial Technology
    - Term: 2020 ~ 2021 (Phase 1)
    - Budget: 1,030,920,000 KRW
    - Consortium: Korea University (lead), GIST (participant)

    Contact
    - Name: Taewon Kim (Ph.D. student)
    - E-mail: ailab.ktw@gm.gist.ac.kr
  • Posted: 2020-07-04 21:59:25
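The Sim-to-Real research mentioned above is commonly approached with domain randomization: train the grasping policy across many randomized simulator configurations so that the real world looks like just another sample. Below is a minimal, hedged sketch of the idea; the parameter names and ranges are illustrative assumptions, not the project's actual settings.

```python
import random

# Hypothetical simulator parameter ranges; the project's actual
# randomized parameters are not specified in the post.
PARAM_RANGES = {
    "friction":        (0.4, 1.2),   # surface friction coefficient
    "object_mass_kg":  (0.05, 0.5),  # mass of the part to grasp
    "light_intensity": (0.3, 1.0),   # rendered scene brightness
    "camera_noise":    (0.0, 0.02),  # std of added pixel noise
}

def randomize_sim(rng: random.Random) -> dict:
    """Sample one randomized simulation configuration.

    Training across many such configurations is the core of domain
    randomization for sim-to-real transfer.
    """
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

rng = random.Random(0)
episodes = [randomize_sim(rng) for _ in range(3)]
```

Each training episode would use one sampled configuration, forcing the policy to become robust to the variation instead of overfitting to a single simulated scene.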
Development of an Easy-to-Install Intelligent Multi-Joint Meal Assistance Robot, 2019-2021
  • Author: Artificial Intelligence Laboratory
  • Views: 241
  • Development of a Caring Robot with AI-Based Dietary Assistance and Meal History Management Functions for the Elderly and the Disabled
    - Multi-joint, single-arm meal assistance robot
    - Food recognition and AI-based meal history management suited to Korean-style meals

    Key Research
    - Building a Korean food-tray meal image dataset
    - Development of a food segmentation algorithm
    - Development of a meal history management service

    Project Explanation
    This project develops a meal assistance robot for Korean food trays and meals. The robot provides dietary assistance through a multi-joint single-arm robotic arm, offers various types of UI (eye tracking, buttons, joystick, voice recognition), and performs AI-based meal history management.

    The GIST Artificial Intelligence Laboratory develops a deep learning-based Korean food recognition algorithm to recognize food on the tray, together with a meal history management service. While traditional food recognition models often rely on food-tray features, this study develops deep learning models that use tray-independent food textures. We have built a food-tray photo collection system for Korean food recognition and are constructing datasets.

    We develop a deep learning-based food segmentation model that recognizes the type and location of each food, and use the deep learning model to recognize its relative position on the tray. We also develop a meal history management system based on the recognition results, with applications for mobile devices.

    Project Detail
    - Funding: Ministry of Trade, Industry and Energy; Korea Evaluation Institute of Industrial Technology
    - Term: 2019.04.01 ~ 2021.12.31 (33 months)
    - Budget: 3,680,000,000 KRW
    - Consortium: Cymechs, Korea Electronics Technology Institute (KETI), Korea Institute of Industrial Technology (KITECH), University Industry Liaison Office of CNU, Seoul National University Bundang Hospital

    Contact
    - Name: Joosoon Lee (Ph.D. student)
    - E-mail: joosoon1111@gist.ac.kr
  • Posted: 2020-06-05 09:19:57
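Recognizing a food's "relative position on the tray", as described above, can be reduced to mapping the centroid of a segmentation mask to a tray compartment. The sketch below assumes a hypothetical 2x3 compartment grid and image size; the project's actual tray geometry is not described in the post.

```python
# Map a detected food region to a tray compartment.
# Assumption (not from the post): the tray image is 600x400 pixels
# and the tray is a 2x3 grid of compartments.

def centroid(mask_pixels):
    """Centroid (x, y) of a segmentation mask given as pixel coordinates."""
    xs = [p[0] for p in mask_pixels]
    ys = [p[1] for p in mask_pixels]
    return sum(xs) / len(xs), sum(ys) / len(ys)

def tray_compartment(mask_pixels, tray_w=600, tray_h=400, cols=3, rows=2):
    """Return the (row, col) compartment containing the mask centroid."""
    cx, cy = centroid(mask_pixels)
    col = min(int(cx / (tray_w / cols)), cols - 1)
    row = min(int(cy / (tray_h / rows)), rows - 1)
    return row, col

# A small blob in the upper-left region of the tray image.
rice_mask = [(50, 60), (70, 80), (60, 70)]
print(tray_compartment(rice_mask))  # (0, 0)
```

In practice the mask would come from the segmentation model rather than a hand-written pixel list, and the compartment layout would be calibrated per tray model.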
Development of Core Technologies for Cloud-Based Composite Robot Artificial Intelligence through Continuous Intelligence Augmentation and Sharing
  • Author: Artificial Intelligence Laboratory
  • Views: 165
  • Overview
    The goal of this project is to augment robot intelligence by sharing and transferring the data, AI models, and knowledge required for robot manipulation through a cloud-based macro system.

    In the overall cloud system, a cloudlet is a local site that uses robots for industrial, manufacturing, or service purposes. The center cloud is a computing server that collects the data, AI models, and other manipulation-related knowledge from the multiple cloudlets.

    To automate sharing and transfer in this large cloud system, each cloudlet detects unknown data, called out-of-distribution data. When unknown data is detected, the cloudlet automatically requests the related information from the center cloud. The center AI model then transfers the requested information, collected from other sites, to the cloudlet's AI model through knowledge distillation techniques. By repeating this request-and-transfer process through the cloud, the robot intelligence of the multiple sites (cloudlets) is augmented with the shared and transferred information.

    Key Research
    - Cloud robotics
    - Intelligence augmentation
    - Knowledge distillation
    - Knowledge transfer
    - Anomaly detection
    - Out-of-distribution detection

    Project Members
    - Sungho Shin
    - Yeonguk Yu
    - Jongwon Kim

    Contact
    - Name: Sungho Shin (Ph.D. student)
    - E-mail: hogili89@gist.ac.kr
  • Posted: 2020-06-04 14:37:41
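The out-of-distribution detection step described above is often implemented with the maximum-softmax-probability baseline: an input whose top class probability is low is flagged as unknown. The sketch below shows that baseline only; the project's actual detector is not specified in the post, and the threshold is an illustrative assumption.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def is_out_of_distribution(logits, threshold=0.7):
    """Flag an input as OOD when the classifier's top softmax
    probability falls below a confidence threshold."""
    return max(softmax(logits)) < threshold

confident = [9.0, 1.0, 0.5]   # peaked prediction -> looks in-distribution
uncertain = [1.1, 1.0, 0.9]   # flat prediction -> likely unknown data
```

In the cloud setting, a cloudlet would run this check on incoming data and trigger the request to the center cloud whenever `is_out_of_distribution` fires.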
Development of Swarm Autonomous Navigation Algorithms
  • Author: Artificial Intelligence Laboratory
  • Views: 223
  • Development of Autonomous Navigation Algorithms for Swarm USVs

    Overview
    - Development of data preprocessing and multi-modal embedding algorithms
    - Development of swarm situational awareness and intention inference algorithms
    - Development of optical-image-based automatic maritime object detection

    Key Research
    - Data preprocessing and multi-modal embedding
    - Situational awareness and intention inference
    - Optical-image-based maritime object detection

    Project Explanation
    We design a system that integrates the various kinds of data obtainable at sea and develop multi-modal embedding and situational awareness technology. Various types of information, such as camera, radar, and lidar data, are collected at sea; this information is reorganized into situational awareness information to build a database for intention inference.

    Using the multi-modal embedded data, we develop an algorithm that infers the intentions of opposing ships in a swarm situation, along with their short-term maneuvering range and position. Inferring the enemy's course of action (COA) in a maritime battlefield is a key factor in winning with minimal damage: if the intentions of a swarm of ships can be inferred, appropriate orders can be given to allied ships.

    We also develop algorithms for preprocessing maritime optical video and for detecting and tracking ships. Optical video acquired at sea can be of low quality because of ship motion, water droplets, fog, and rain, so we study preprocessing techniques that improve video quality, together with object detection on the preprocessed video. In addition, we study object tracking, since it is necessary to know whether a detected object is the same object as one detected previously.

    Project Detail
    - Funding: Agency for Defense Development (ADD)
    - Term: 2020.03.01 ~ 2021.11.10
    - Budget: 527,000,000 KRW
    - Consortium: KRISO, Hanwha Systems, Seadronix Inc., Chosun Univ., Kongju Univ., GIST

    Contact
    - Name: Seongju Lee (Ph.D. student)
    - E-mail: lsj2121@gm.gist.ac.kr
  • Posted: 2020-05-29 13:13:01
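The "same object as one detected previously" question above is the data-association core of tracking-by-detection. A common minimal approach is to match each new detection to the previous-frame box with the highest intersection-over-union (IoU). This sketch shows that greedy IoU matching; the project's actual tracker and threshold are assumptions.

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match_tracks(prev_boxes, new_boxes, thresh=0.3):
    """Greedy matching: each new detection inherits the index of the
    previous-frame box with the highest IoU above the threshold.
    Unmatched new detections would start new tracks."""
    matches, used = {}, set()
    for j, nb in enumerate(new_boxes):
        best, best_iou = None, thresh
        for i, pb in enumerate(prev_boxes):
            if i in used:
                continue
            v = iou(pb, nb)
            if v > best_iou:
                best, best_iou = i, v
        if best is not None:
            matches[j] = best
            used.add(best)
    return matches

frame1 = [(0, 0, 10, 10), (50, 50, 60, 60)]   # two ships in frame 1
frame2 = [(2, 1, 12, 11), (80, 80, 90, 90)]   # first ship moved; a new arrival
print(match_tracks(frame1, frame2))  # {0: 0}
```

Real maritime trackers add motion models and appearance features on top of this, since ships can be occluded by waves or fog for many frames.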
Development of Deep Reinforcement Learning-Based Shared Autonomy Technology for Intelligent Response to Unstructured Work Environments such as Assembly Tasks, 2020-2023
  • Author: Artificial Intelligence Laboratory
  • Views: 140
  • "Shared autonomy based on deep reinforcement learning for responding intelligently to unfixed environments such as robotic assembly tasks"

    Overview
    In this project, we develop a deep learning-based shared autonomy system that allows robots to grasp and assemble unseen objects in unstructured work environments while understanding the intention of human workers.

    Key Research
    - Deep reinforcement learning
    - Object pose estimation
    - Object detection
    - Grasp point detection
    - Sim-to-real techniques
    - Domain randomization
    - Robot grasping
    - Human intention recognition

    Objectives
    1. A shared autonomy technique based on deep reinforcement learning for robotic assembly tasks in unstructured environments where the position of the object is unpredictable.
    2. A generalized assembly technique that can be applied to unknown assembly objects by using multi-modal information, including vision, contact force, and robot state, for training.
    3. A sim-to-real technique that reduces the contact-force gap between simulation and the real environment using a contact model based on artificial neural networks.

    Project Explanation
    Robots are widely used for automation in many industries because they are highly efficient at repetitive tasks in fixed work environments. In an unstructured environment where the target object changes, however, simple repetitive actions are insufficient, and assembly, one such unstructured task, has not yet reached satisfactory automation despite decades of effort. For robotic assembly to be widely used in industrial sites, we therefore need assembly methods that work in unstructured environments and remain applicable when the target object changes.

    As a result, three techniques are required. First, a shared autonomy technique based on deep reinforcement learning for robotic assembly in unstructured environments where the object position is unpredictable. Second, a generalized assembly technique applicable to unknown objects, trained with multi-modal information including vision, contact force, and robot state. Finally, a sim-to-real technique that reduces the contact-force gap between simulation and reality using a neural-network-based contact model.

    Once the proposed technology is completed, assembly at industrial sites will not only be faster than with existing techniques but also applicable to a wider variety of objects, improving work efficiency by automating assembly processes that have not yet been automated. Since assembly is the least automated process in manufacturing, demand is also expected from companies beyond those mentioned above.

    Project Members
    - Seongho Bak
    - Seunghyeok Back
    - Raeyoung Kang
    - Yeonguk Yu
  • Posted: 2020-05-27 19:44:52
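A common way to realize shared autonomy of the kind this project describes is action arbitration: the executed command blends the human operator's input with the learned policy's action, weighted by how confident the system is in the inferred human intent. The blending rule and confidence source below are illustrative assumptions, not the project's actual method.

```python
def blend_action(human_cmd, policy_action, intent_confidence):
    """Linear arbitration between a human command and a policy action.

    intent_confidence in [0, 1]: 1.0 means the intent recognizer is
    certain, so the robot follows its own policy; 0.0 defers fully to
    the human operator.
    """
    w = max(0.0, min(1.0, intent_confidence))
    return [w * p + (1.0 - w) * h for h, p in zip(human_cmd, policy_action)]

human = [1.0, 0.0, 0.0]    # operator pushes along +x
policy = [0.0, 1.0, 0.0]   # policy wants to move along +y
print(blend_action(human, policy, 0.5))  # [0.5, 0.5, 0.0]
```

In a deep RL setting, `policy_action` would come from the trained assembly policy and `intent_confidence` from the intention recognition model, so control shifts smoothly between human and robot as the situation becomes more or less ambiguous.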