(200701) Writing

  1. current situations
    1. how people currently test for coronavirus
      1. Currently, Korea tests symptomatic people for coronavirus at screening clinics, National Safe Hospitals, and drive-through testing stations.
      2. A screening clinic is a facility at a designated large hospital or district public health center where people with suspected infection symptoms, such as cough or fever, have samples collected before entering the medical institution.
      3. A National Safe Hospital is a medical institution that runs a dedicated zone for respiratory patients, separated from non-respiratory patients, to block the possibility of in-hospital infection.
      4. A drive-through testing station is a place where testing and sample collection are done in a parking lot or other large open space.
      5. All of these testing sites require staff on site at all times.
      6. Overseas arrivals are likewise split into symptomatic and asymptomatic cases: symptomatic arrivals take a diagnostic test at the airport, while asymptomatic ones are tested in their local district.
  2. problems
    1. Medical facilities and staff are exposed to infection risk.
      1. To reduce the risk of infection and enable diagnosis at a distance, medical facilities and medical staff must be protected from exposure.
      2. Relieving medical staff of the heavy workload imposed by sample collection alone is also needed.
  3. solution
    1. We will use robot arms to improve the quality of life and develop new technologies.
      1. We need a robot that can collect biometric information.
    2. The advantage of robot arms is that, like human arms, they can move at almost any angle.

  1. This covers developing a virus-testing robot so that robotic diagnosis runs smoothly.
    1. With this robot, patients can be diagnosed and tested for disease without any risk of infection.
  2. It saves resources such as time and removes the transmission risk that comes with visiting a hospital.
    1. Because patients do not visit in person, it also helps relieve staffing shortages.
    2. It saves time and adds convenience for patients and their guardians.
    3. Even when a doctor cannot reach the site or faces infection risk, it makes diagnosing the patient's infection possible, which will help provide urgent measures in emergencies and a higher quality of service.
    4. In particular, even in areas with few medical staff, a robot can operate 24 hours a day, unlike human personnel.
    5. It protects medical staff from infection risk, saves manpower, and lowers the intensity of their labor.
    6. In short, the advantages include lighter workloads, visitor convenience, and so on.

(200630) RobotStudio

RobotStudio instruction by Tim Callinan

I watched this video to learn how to use RobotStudio.

There is a tab for importing a library component such as a gripper or camera.

Once imported, the gripper sits at the (0, 0, 0) point of the scene. To attach the library component to the robot, go to Layout, right-click the gripper, and attach it to the robot.

The attached gripper can then be moved with the move and rotate tools.

In this step, the tip of the robot gripper can record its own position and angle.

On the right side, the path is set with ‘Teach Target’: first move the tip of the gripper along the desired path, then teach the target and add the movement targets.

After creating the path, this step creates tasks for the sequence of paths.

And then I watched this video:

How to create a simple pick and place program for an ABB robot using the pendant by BCIT

(200630) Writing

  1. As there is currently no effective treatment for coronavirus (COVID-19), it seems appropriate to find a strategy to quickly diagnose and deal with symptoms.
    1. Traveling long distances to get tested for the virus carries a high possibility of infection, and the infection rate among medical staff is also high.
      1. To treat a patient's disease, the hospital diagnoses the patient's pathological condition and shares and delivers prescriptions to the patient.
      2. However, diagnosis is difficult when the patient is in no condition to go to the hospital.
      3. In outlying areas of the city, hospitals are hard to access and getting there takes too long.
      4. Even in cities, you spend just as much time if no nearby doctor has the expertise you need.
    2. To reduce the risk of infection, we need tools that have the same knowledge as medical professionals but cannot be infected.
      1. If a patient has a droplet-transmitted disease that requires isolation, transporting them creates further infection risk.
      2. A virus or respiratory disease can be transmitted within a distance of 3 meters, so passing other people increases the risk of infection.
      3. The risk grows with every trip to the hospital to have samples collected for COVID-19 diagnosis.
      4. In particular, medical staff are the most vulnerable to the risk of infection.
      5. Testing samples in hospitals also takes a lot of manpower.
    3. So, instead of relying on a small pool of medical professionals, we need a robot that can collect biometric information.
    4. The advantage of robot arms is that, like human arms, they can move at almost any angle.
    5. To reduce the risk of infection and enable diagnosis at any distance, we will use robot arms to improve quality of life and develop new technologies.

  1. Since the advent of the coronavirus, there have been approximately 2,200 papers on medical technology using robotics.
    1. Robots were mostly used for surgery or rehabilitation.
  2. Among them, excluding applications such as surgery and cancer screening, – papers studied robots for coronavirus tests.
    1. “Design of a Low-cost Miniature Robot to Assist the COVID-19 Nasopharyngeal Swab Sampling” proposes an inexpensive miniature robot for swab sampling that is easy to assemble and can be controlled remotely, so that a suspected patient can assemble it themselves and handle the pre- and post-test disassembly. However, because it is made for a one-time test, a lot of material can be wasted.
    2. “Robot swabs patients’ throats for Covid-19” presents a robot built to recognize a person's anatomy with a camera and insert the swab at the exact position to collect a sample. However, it requires a camera and a computer interface for monitoring, and the process takes longer.
  3. We want to build a robot that is reusable and does not require a separate interface.

(200427) How to apply multiple markers in AR?

I read a few articles about multiple markers to learn how to implement them in Unity.

  • 3ds Max, the Vuforia package, and Unity 3D were used to implement the augmented reality system in this experiment.
  • 3ds Max produces 3D images and converts 3D images produced by external programs into formats that can be used in Unity projects.
  • The Vuforia package measures the recognition rate of the marker made for each part in order to display the corresponding image of the vehicle part.
  • At this stage, markers with high recognition rates are selected and stored in a database.
  • Unity 3D sets up the background on which the image floats and places the 3D image produced by 3ds Max at the target coordinates.
  • We can also use C# scripts to control the movement of the image when it is shown at the 3D coordinates extracted from the marker.
  • The project is built as an application using the APK (Android Package) format and the Java Development Kit (JDK).
  • The final application files can then run the augmented reality program on various Android smart devices. (The same multi-marker idea is sketched in ARKit below.)
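
The article's pipeline is Unity/Vuforia, but the same multi-marker idea can be sketched with ARKit, which I use elsewhere in this log. Below is a minimal sketch, assuming the marker images are collected in an asset-catalog group named "Markers" (a name made up for illustration):

import ARKit

// Minimal sketch: track several reference images ("markers") in one session.
// Assumption: the marker images live in an asset-catalog group named "Markers".
func runMultiMarkerTracking(on sceneView: ARSCNView) {
    guard let markers = ARReferenceImage.referenceImages(inGroupNamed: "Markers",
                                                         bundle: nil) else { return }
    let configuration = ARImageTrackingConfiguration()
    configuration.trackingImages = markers
    // Track every marker at once instead of one at a time.
    configuration.maximumNumberOfTrackedImages = markers.count
    sceneView.session.run(configuration)
}

// Each recognized marker arrives as an ARImageAnchor; its name identifies which
// 3D model to place, mirroring the marker-to-part mapping described above.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    print("Found marker: \(imageAnchor.referenceImage.name ?? "unnamed")")
}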

(200416) How to extend object detection to plane detection?

This scanning code has a class that detects objects.

import Foundation
import ARKit
import SceneKit

class DetectedObject: SCNNode {
    
    var displayDuration: TimeInterval = 1.0 // How long this visualization is displayed in seconds after an update
    
    private var detectedObjectVisualizationTimer: Timer?
    
    private let pointCloudVisualization: DetectedPointCloud
    
    private var boundingBox: DetectedBoundingBox?
    
    private var originVis: SCNNode
    private var customModel: SCNNode?
    
    private let referenceObject: ARReferenceObject
    
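    // Swaps the default point-cloud/bounding-box visualization for a custom
    // 3D model when a URL is given, and restores it when the URL is nil.
    // Note: load3DModel(from:) is a helper assumed to be defined elsewhere in
    // the scanning sample; it is not shown in this excerpt.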
    func set3DModel(_ url: URL?) {
        if let url = url, let model = load3DModel(from: url) {
            customModel?.removeFromParentNode()
            customModel = nil
            originVis.removeFromParentNode()
            ViewController.instance?.sceneView.prepare([model], completionHandler: { _ in
                self.addChildNode(model)
            })
            customModel = model
            pointCloudVisualization.isHidden = true
            boundingBox?.isHidden = true
        } else {
            customModel?.removeFromParentNode()
            customModel = nil
            addChildNode(originVis)
            pointCloudVisualization.isHidden = false
            boundingBox?.isHidden = false
        }
    }
    
    init(referenceObject: ARReferenceObject) {
        self.referenceObject = referenceObject
        pointCloudVisualization = DetectedPointCloud(referenceObjectPointCloud: referenceObject.rawFeaturePoints,
                                                     center: referenceObject.center, extent: referenceObject.extent)
        
        if let scene = SCNScene(named: "axes.scn", inDirectory: "art.scnassets") {
            originVis = SCNNode()
            for child in scene.rootNode.childNodes {
                originVis.addChildNode(child)
            }
        } else {
            originVis = SCNNode()
            print("Error: Coordinate system visualization missing.")
        }
        
        super.init()
        addChildNode(pointCloudVisualization)
        isHidden = true
        
        set3DModel(ViewController.instance?.modelURL)
    }
    
    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
    
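    // Called on every ARKit update of the detected object: refreshes the pose
    // and point cloud, lazily builds the bounding box once, and keeps the
    // visualization visible for displayDuration seconds.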
    func updateVisualization(newTransform: float4x4, currentPointCloud: ARPointCloud) {
        // Update the transform
        self.simdTransform = newTransform
        
        // Update the point cloud visualization
        updatePointCloud(currentPointCloud)
        
        if boundingBox == nil {
            let scale = CGFloat(referenceObject.scale.x)
            let boundingBox = DetectedBoundingBox(points: referenceObject.rawFeaturePoints.points, scale: scale)
            boundingBox.isHidden = customModel != nil
            addChildNode(boundingBox)
            self.boundingBox = boundingBox
        }
        
        // This visualization should only be displayed for displayDuration seconds after every update.
        self.detectedObjectVisualizationTimer?.invalidate()
        self.isHidden = false
        self.detectedObjectVisualizationTimer = Timer.scheduledTimer(withTimeInterval: displayDuration, repeats: false) { _ in
            self.isHidden = true
        }
    }
    
    func updatePointCloud(_ currentPointCloud: ARPointCloud) {
        pointCloudVisualization.updateVisualization(for: currentPointCloud)
    }
}

I wanted to extend this class with plane detection.

private func configureSceneView(_ sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal, .vertical]
    configuration.isLightEstimationEnabled = true

    sceneView.session.run(configuration)
}

func attach(to sceneView: ARSCNView) {
    //...

    configureSceneView(self.sceneView!)
}

extension ARSceneManager: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        // Only react to plane anchors; other anchor types are ignored here.
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }

        print("Found plane: \(planeAnchor)")
    }

}

This plane detection needs to be merged with the object detection above, which means combining the detected object's name with the model library, as sketched below.
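
One way to merge them, sketched under my own assumptions (the asset-catalog group name "ScannedObjects" is made up): ARWorldTrackingConfiguration accepts both planeDetection and detectionObjects, so a single session can deliver plane anchors and object anchors through the same delegate, and the object anchor's name can be used to look up the matching model in the library.

import ARKit

// Sketch: detect planes and scanned reference objects in one session.
// Assumption: the .arobject files are collected in an asset-catalog group
// named "ScannedObjects".
func configureCombinedDetection(_ sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal, .vertical]
    if let objects = ARReferenceObject.referenceObjects(inGroupNamed: "ScannedObjects",
                                                        bundle: nil) {
        configuration.detectionObjects = objects
    }
    sceneView.session.run(configuration)
}

// The same delegate callback now distinguishes the two anchor types.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    if let planeAnchor = anchor as? ARPlaneAnchor {
        print("Found plane: \(planeAnchor)")
    } else if let objectAnchor = anchor as? ARObjectAnchor {
        // referenceObject.name links the detection back to the model library.
        print("Found object: \(objectAnchor.referenceObject.name ?? "unnamed")")
    }
}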

(200407) What do we need for AR using a robot arm?

I read one paper related to my project.

The title is “Robot programming through augmented trajectories in augmented reality”.

This paper uses a mixed reality head-mounted display, the Microsoft HoloLens, and a 7-DOF robot arm. They designed an augmented reality robotic interface with four interactive functions to ease the robot programming task: 1) trajectory specification; 2) virtual previews of robot motion; 3) visualization of robot parameters; 4) online reprogramming during simulation and execution.