This is a project for developing a companion robot to help nurses in a pediatric intensive care unit (PICU). There is a shortage of nurses in Taiwan due to several factors, such as low salaries, an unsatisfying working environment, and the shrinking young workforce caused by sub-replacement fertility. In a PICU, children have to be separated from their parents most of the time for sanitation reasons. In contrast, a companion robot can be fully sanitized and stay in the PICU. In practice, a PICU nurse usually needs to take care of multiple wards to keep the operation economical. When a nurse is busy with one ward, she/he needs a helper to look after the other wards. Currently such a duty is delegated to a nurse colleague, so multiple nurses are on duty in a PICU at the same time. Introduction slides (Link). Code is available on my GitHub webpage (Link).
For intraocular microsurgery, robotic assistance is a cutting-edge research field because it promises to expand human capabilities and improve the safety and efficiency of this intricate surgical process. Because depth perception is critical in intraocular microsurgery, in this project we want to estimate the depth of the medical instruments inside the eyeball. We want to develop a method that can warn the operator if the instrument is too close to the retina.
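As a rough illustration of the intended warning mechanism, the sketch below checks the estimated clearance between the instrument tip and the retina against a safety threshold. The function name, units, and threshold value are hypothetical assumptions, not the project's actual implementation.

```python
# Minimal sketch of a depth-based proximity warning (illustrative, not the
# project's actual code). All names and the 0.5 mm threshold are assumptions.
def proximity_warning(tip_depth_mm: float, retina_depth_mm: float,
                      threshold_mm: float = 0.5) -> bool:
    """Return True when the estimated tip-to-retina clearance is unsafe."""
    clearance = retina_depth_mm - tip_depth_mm  # positive when the tip is above the retina
    return clearance < threshold_mm

# Example: the tip is estimated to be 0.3 mm above the retina, so warn the operator.
if proximity_warning(tip_depth_mm=11.7, retina_depth_mm=12.0):
    print("Warning: instrument too close to the retina")
```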
Current research results have been published in ECCV'24 and the paper is available on my publications webpage (Link). However, some puzzles remain unsolved. For example, why does this method work poorly on some action datasets? How does the quality of the text affect the method's performance? If we refine the text descriptions of the NTU RGB+D dataset, what will happen? Is the skeleton data of the NTU RGB+D dataset so noisy that this is why the proposed method works? Given a high-quality skeleton action dataset, will the proposed method still work? To answer these questions, further research is required.
Motivation: There is a common problem among current medical datasets: they are either private or small in scale. Because large-scale publicly available datasets are very rare, in particular high-resolution, high-quality image datasets, we wonder whether we can adopt effective ways to synthesize high-resolution, high-quality medical images.
Approach: There are knowledge distillation methods and generative models such as diffusion models. We survey these methods to look for proper candidates for our purpose.
Motivation: Many image generation algorithms and tools, such as ControlNet, Krita, and ComfyUI, have rapidly become easy to use. We wonder how to utilize them to effectively generate indoor scenes for construction purposes. There are many programming tasks and research issues behind this goal.
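To make the workflow more concrete, here is a minimal sketch of conditioning an image generator on an edge map with ControlNet, assuming the Hugging Face diffusers library; the model IDs, reference image, and prompt are illustrative placeholders rather than the project's actual setup.

```python
# Minimal ControlNet sketch with Hugging Face diffusers (illustrative only).
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Edge map extracted from a hypothetical reference room image.
reference = cv2.imread("reference_room.png")
gray = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
edges = Image.fromarray(np.stack([cv2.Canny(gray, 100, 200)] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")

image = pipe("an indoor construction site with exposed concrete and scaffolding",
             image=edges, num_inference_steps=30).images[0]
image.save("generated_indoor_scene.png")
```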
This is an industry collaboration project. Given a query shop drawing, we aim to retrieve similar shop drawings from a large dataset, component by component. The purpose of this project is to encourage employees to use standard drawings and to reduce the cost of creating repeated or highly similar drawings.
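One plausible baseline for such retrieval, shown below as a hedged sketch rather than the project's actual pipeline, is to embed each drawing component with a pretrained CNN and rank database components by cosine similarity to the query; the backbone choice and file names are assumptions.

```python
# Embedding-based retrieval sketch (illustrative baseline, not the actual system).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # keep the 512-d pooled feature as the embedding
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)), T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    f = backbone(x).squeeze(0)
    return f / f.norm()             # L2-normalize so a dot product equals cosine similarity

# Hypothetical component crops extracted from shop drawings.
database = {p: embed(p) for p in ["comp_001.png", "comp_002.png", "comp_003.png"]}
query = embed("query_component.png")
ranked = sorted(database, key=lambda p: float(query @ database[p]), reverse=True)
print("Most similar components:", ranked[:3])
```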
Image courtesy: Tongue Image Segmentation and Constitution Identification with Deep Learning, Master’s thesis of Mr. Chien-Ho Lin (林建和) of the Traditional Chinese Medicine Institute at Chang Gung University, 2022, advised by Prof. Hsien-Hung Yang (楊賢鴻) and Prof. Jiann-Der Lee (李建德).
The primary researcher of this project is a Ph.D. student in the CGU School of Traditional Chinese Medicine, and I am a consultant on this project. We want to compile a dataset consisting of tongue images and the syndromes of digestive system diseases. Thereafter, we can train a computer vision model to predict digestive system diseases from tongue images. In the theory of traditional Chinese medicine, digestive system diseases and tongue appearance are highly related, but the modern Western medicine system ignores this relationship entirely. The most challenging part of this research lies in data collection, segmentation, and labeling. Because this is an application-oriented research project, they plan to use the well-established ResNet as the prediction model. Thus, the contribution of this project is to validate the traditional Chinese medicine theory and to develop a practical method to apply it, rather than to explore new computer vision techniques.
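Since the project plans to build on ResNet, a minimal sketch of such a predictor is given below, assuming PyTorch/torchvision; the number of syndrome classes and the training-loop details are placeholders, not details confirmed by the project.

```python
# Sketch of a ResNet-based syndrome classifier for segmented tongue images
# (illustrative; the class count and hyperparameters are assumptions).
import torch
import torch.nn as nn
import torchvision.models as models

NUM_SYNDROME_CLASSES = 5  # hypothetical number of digestive-disease syndrome labels

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_SYNDROME_CLASSES)  # replace the ImageNet head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One supervised step on a batch of segmented tongue images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```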
Motivation: We want to learn from publicly available large-scale brain tumor segmentation datasets and apply the learned knowledge to our own small-scale private dataset. However, we do not have ground-truth segmentation labels for our own private dataset, and we do not have experts to label those MRI images. To validate our algorithm, we downloaded another small-scale brain tumor dataset with labeled segmentation data.
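For the validation step on the downloaded labeled dataset, a natural metric is the Dice coefficient; the sketch below shows how a model pretrained on public data could be scored on that set, with the segmentation model and data loading left as placeholders since they are not specified here.

```python
# Dice-based validation sketch (illustrative; the segmentation model and data
# loading are placeholders, not the project's actual code).
import torch

def dice_score(pred_mask: torch.Tensor, gt_mask: torch.Tensor, eps: float = 1e-6) -> float:
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, gt = pred_mask.bool(), gt_mask.bool()
    intersection = (pred & gt).sum().item()
    return (2.0 * intersection + eps) / (pred.sum().item() + gt.sum().item() + eps)

@torch.no_grad()
def evaluate(model: torch.nn.Module, volumes, masks, threshold: float = 0.5) -> float:
    """Average Dice over the labeled validation set (e.g., the downloaded small dataset)."""
    model.eval()
    scores = [dice_score(torch.sigmoid(model(v.unsqueeze(0))).squeeze(0) > threshold, m)
              for v, m in zip(volumes, masks)]
    return sum(scores) / len(scores)
```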
This is my master's student Brian Liu's research topic. He wants to develop an AI wristband to address the elderly care problem, because Taiwan will soon have a much larger aged population and the cost of caring for them will be tremendous. He has participated in several competitions to promote his idea and prototype. Rich information is available in his YouTube video below (in Chinese, narrated in Mandarin; suggested playback speed: 0.8).
One ECCV'24 paper accepted - August 28, 2024
Brian Liu's team won a 100k-NTD startup supporting fund from the Ministry of Education - April 22, 2024
Brian Liu's team won the championship of the 2023 CGU Student Startup Competition - December 7, 2023