I watched this video to see how a factory robot works in RobotStudio.
The YouTuber created a guide for this library.
I watched this video to learn how to run RobotStudio.
There is a tab for importing a library part such as a gripper or a camera.
After importing, the gripper is placed at the origin (0, 0, 0) of the scene. To attach the library part to the robot, go to Layout, right-click the gripper, and attach it to the robot.
The attached gripper can then be moved with the move and rotate tools.
In this step, the tip of the robot gripper can record its own position and angle.
On the right side, the path is set with 'Teach Target'. So I first move the tip of the gripper along my own path, teach each target, and add the targets to the move sequence.
After creating the paths, tasks are created for the sequence of paths.
And then I watched this video.
I read a few articles about multiple markers to learn how to implement them in Unity.
This scanning code has a class that detects objects.
import Foundation
import ARKit
import SceneKit

class DetectedObject: SCNNode {

    var displayDuration: TimeInterval = 1.0 // How long this visualization is displayed in seconds after an update

    private var detectedObjectVisualizationTimer: Timer?

    private let pointCloudVisualization: DetectedPointCloud
    private var boundingBox: DetectedBoundingBox?

    private var originVis: SCNNode
    private var customModel: SCNNode?

    private let referenceObject: ARReferenceObject

    func set3DModel(_ url: URL?) {
        if let url = url, let model = load3DModel(from: url) {
            // Replace any existing model and the origin visualization with the custom model.
            customModel?.removeFromParentNode()
            customModel = nil
            originVis.removeFromParentNode()

            ViewController.instance?.sceneView.prepare([model], completionHandler: { _ in
                self.addChildNode(model)
            })
            customModel = model
            pointCloudVisualization.isHidden = true
            boundingBox?.isHidden = true
        } else {
            // No custom model: fall back to the origin, point cloud, and bounding box visualizations.
            customModel?.removeFromParentNode()
            customModel = nil
            addChildNode(originVis)
            pointCloudVisualization.isHidden = false
            boundingBox?.isHidden = false
        }
    }

    init(referenceObject: ARReferenceObject) {
        self.referenceObject = referenceObject
        pointCloudVisualization = DetectedPointCloud(referenceObjectPointCloud: referenceObject.rawFeaturePoints,
                                                     center: referenceObject.center, extent: referenceObject.extent)

        if let scene = SCNScene(named: "axes.scn", inDirectory: "art.scnassets") {
            originVis = SCNNode()
            for child in scene.rootNode.childNodes {
                originVis.addChildNode(child)
            }
        } else {
            originVis = SCNNode()
            print("Error: Coordinate system visualization missing.")
        }

        super.init()
        addChildNode(pointCloudVisualization)
        isHidden = true

        set3DModel(ViewController.instance?.modelURL)
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func updateVisualization(newTransform: float4x4, currentPointCloud: ARPointCloud) {
        // Update the transform
        self.simdTransform = newTransform

        // Update the point cloud visualization
        updatePointCloud(currentPointCloud)

        if boundingBox == nil {
            let scale = CGFloat(referenceObject.scale.x)
            let boundingBox = DetectedBoundingBox(points: referenceObject.rawFeaturePoints.points, scale: scale)
            boundingBox.isHidden = customModel != nil
            addChildNode(boundingBox)
            self.boundingBox = boundingBox
        }

        // This visualization should only be displayed for displayDuration seconds on every update.
        self.detectedObjectVisualizationTimer?.invalidate()
        self.isHidden = false
        self.detectedObjectVisualizationTimer = Timer.scheduledTimer(withTimeInterval: displayDuration, repeats: false) { _ in
            self.isHidden = true
        }
    }

    func updatePointCloud(_ currentPointCloud: ARPointCloud) {
        pointCloudVisualization.updateVisualization(for: currentPointCloud)
    }
}
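In the sample, this class is driven from ARSession updates. The following is a minimal sketch of that wiring, not the sample's exact bookkeeping; the detectedObjects dictionary and the sceneView reference are my own simplifications:

var detectedObjects = [UUID: DetectedObject]()

func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    // Create one DetectedObject node per recognized reference object.
    for case let objectAnchor as ARObjectAnchor in anchors {
        let node = DetectedObject(referenceObject: objectAnchor.referenceObject)
        detectedObjects[objectAnchor.identifier] = node
        sceneView.scene.rootNode.addChildNode(node)
    }
}

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    guard let pointCloud = frame.rawFeaturePoints else { return }
    // Keep each visualization aligned with its anchor's latest transform.
    for case let objectAnchor as ARObjectAnchor in frame.anchors {
        detectedObjects[objectAnchor.identifier]?
            .updateVisualization(newTransform: objectAnchor.transform,
                                 currentPointCloud: pointCloud)
    }
}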
I wanted to extend this class to add plane detection.
private func configureSceneView(_ sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal, .vertical]
    configuration.isLightEstimationEnabled = true
    sceneView.session.run(configuration)
}

func attach(to sceneView: ARSCNView) {
    //...
    configureSceneView(self.sceneView!)
}
extension ARSceneManager: ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        // Only handle anchors that represent detected planes.
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        print("Found plane: \(planeAnchor)")
    }
}
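To go beyond logging, the same delegate method could also draw the detected plane. Here is a minimal sketch that could replace the print-only version above; the semi-transparent color and geometry choices are my own, not from the tutorial:

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }

    // Build a semi-transparent plane matching the anchor's estimated extent.
    let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                         height: CGFloat(planeAnchor.extent.z))
    plane.firstMaterial?.diffuse.contents = UIColor.blue.withAlphaComponent(0.3)

    let planeNode = SCNNode(geometry: plane)
    planeNode.simdPosition = planeAnchor.center
    // An ARPlaneAnchor extends in its local x-z plane, so lay the SCNPlane flat in it.
    planeNode.eulerAngles.x = -.pi / 2

    node.addChildNode(planeNode)
}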
This plane detection needs to be merged with the object detection, and the detected object's name needs to be linked to its library entry.
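As a first step toward merging, a single session configuration can run plane detection and object detection together, and the delegate can branch on the anchor type. A minimal sketch, assuming the scanned reference objects live in an asset-catalog group named "gallery" (the group name is an assumption):

let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal, .vertical]
configuration.detectionObjects = ARReferenceObject.referenceObjects(inGroupNamed: "gallery", bundle: nil) ?? []
sceneView.session.run(configuration)

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    switch anchor {
    case let planeAnchor as ARPlaneAnchor:
        print("Found plane: \(planeAnchor)")
    case let objectAnchor as ARObjectAnchor:
        // The reference object's name is what ties the detection back to the library entry.
        print("Found object: \(objectAnchor.referenceObject.name ?? "unnamed")")
    default:
        break
    }
}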
I used the previous ScanningApp.xcodeproj to detect a 3D object.
I used a small cuboid-shaped object with the application above.
And it successfully detected the 3D shape.
Next step: detect each plane.
I adjusted the size of the bounding box.
I read one paper related to my project.
The title is “Robot programming through augmented trajectories in augmented reality”.
This paper uses a mixed reality head-mounted display, a Microsoft HoloLens, and a 7-DOF robot arm. They designed an augmented reality robotic interface with four interactive functions to ease the robot programming task: 1) trajectory specification, 2) virtual previews of robot motion, 3) visualization of robot parameters, and 4) online reprogramming during simulation and execution.
I used the scanning application and exported one AR reference object (.arobject) file.
Using this file, I can overlay a virtual scene on the real object.
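A minimal sketch of how that could look, assuming the exported file was added to the app bundle as "scan.arobject" and that "overlay.scn" is the virtual scene to show (both file names are assumptions):

// Load the scanned reference object and run detection with it.
guard let url = Bundle.main.url(forResource: "scan", withExtension: "arobject"),
      let referenceObject = try? ARReferenceObject(archiveURL: url) else {
    fatalError("Could not load the exported .arobject file.")
}
let configuration = ARWorldTrackingConfiguration()
configuration.detectionObjects = [referenceObject]
sceneView.session.run(configuration)

// In the ARSCNViewDelegate: anchor the virtual scene to the recognized object.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard anchor is ARObjectAnchor else { return }
    if let scene = SCNScene(named: "overlay.scn") {
        for child in scene.rootNode.childNodes {
            node.addChildNode(child)
        }
    }
}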