Environment
- Jetson AGX Xavier with JetPack 4.5.1
- Ubuntu 18.04
Building From Source
1. Ensure apt-get is up to date
$ sudo apt-get update && sudo apt-get upgrade
- Note: Use sudo apt-get dist-upgrade, instead of sudo apt-get upgrade, in case you have an older Ubuntu 14.04 version
2. Install Python and its development files via apt-get (Python 2 and 3 both work)
$ sudo apt-get install python python-dev
$ sudo apt-get install python3 python3-dev
- Note: The project will only use Python 2 if it can't use Python 3
3. Run the top-level CMake command with the following additional flag -DBUILD_PYTHON_BINDINGS:bool=true:
$ git clone https://github.com/IntelRealSense/librealsense.git
$ cd ./librealsense
$ mkdir build
$ cd build
$ cmake ../ -DBUILD_PYTHON_BINDINGS:bool=true
- Note: To force compilation with a specific version on a system with both Python 2 and Python 3 installed, add the following flag to CMake command:
-DPYTHON_EXECUTABLE=[full path to the exact python executable]
- Note: Common build errors and their fixes:
Error: Could NOT find OpenSSL, try to set the path to OpenSSL root folder in the system variable OPENSSL_ROOT_DIR (missing: OPENSSL_CRYPTO_LIBRARY OPENSSL_INCLUDE_DIR)
Fix: $ sudo apt-get install libssl-dev
Error: The Xinerama headers were not found
Fix: $ sudo apt-get install xorg-dev libglu1-mesa-dev
$ make -j4
$ sudo make install
4. Update your PYTHONPATH environment variable to add the path to the pyrealsense2 library (add the export line to ~/.bashrc to make it permanent)
$ export PYTHONPATH=$PYTHONPATH:/usr/local/lib/python3.6/pyrealsense2
$ source ~/.bashrc
5. Alternatively, copy the build output (librealsense2.so and pyrealsense2.so) next to your script.
- Note: Python 3 module filenames may contain additional information, e.g. pyrealsense2.cpython-35m-arm-linux-gnueabihf.so
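As a runtime alternative to the PYTHONPATH export in step 4, the module directory can be appended to sys.path before importing. A minimal sketch; the path below is the same assumed install location as in step 4 and should be adjusted to your system:

```python
import sys

# Assumed install location from step 4; adjust to match your system.
PYRS_PATH = "/usr/local/lib/python3.6/pyrealsense2"
if PYRS_PATH not in sys.path:
    sys.path.append(PYRS_PATH)

# pyrealsense2 is now resolvable without exporting PYTHONPATH:
# import pyrealsense2 as rs
print(sys.path[-1])
```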
Examples
For full code examples, see the examples folder of the repository.
# First import the library
import pyrealsense2 as rs

# Create a pipeline object. This object configures the streaming camera and owns its handle
pipeline = rs.pipeline()
pipeline.start()

try:
    while True:
        # Block until a coherent set of frames arrives from the camera
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth:
            continue

        # Print a simple text-based representation of the image, by breaking it
        # into 10x20 pixel regions and approximating the coverage of pixels
        # within one meter
        coverage = [0] * 64
        for y in range(480):
            for x in range(640):
                dist = depth.get_distance(x, y)
                if 0 < dist < 1:
                    coverage[x // 10] += 1
            if y % 20 == 19:
                line = ""
                for c in coverage:
                    line += " .:nhBXWW"[c // 25]
                coverage = [0] * 64
                print(line)
finally:
    pipeline.stop()
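The 10x20 binning in the loop above can be sanity-checked without a camera by substituting a synthetic distance function; fake_distance below is a stand-in for depth.get_distance(x, y), not part of the pyrealsense2 API:

```python
# Synthetic depth map: the left half of the frame is 0.5 m away (within one
# meter), the right half 2.0 m (outside), standing in for a real depth frame.
def fake_distance(x, y):
    return 0.5 if x < 320 else 2.0

coverage = [0] * 64
for y in range(20):                  # one 20-row band of the 480-row frame
    for x in range(640):
        if 0 < fake_distance(x, y) < 1:
            coverage[x // 10] += 1   # 10-pixel-wide bins, as in the example

line = "".join(" .:nhBXWW"[c // 25] for c in coverage)
print(line)  # 32 'W' characters (dense left half) then 32 spaces
```

Each left-half bin collects 10 x 20 = 200 in-range pixels, so 200 // 25 = 8 selects the densest character 'W'; empty bins select the space at index 0.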
NumPy Integration
Librealsense frames support the buffer protocol. A numpy array can be constructed using this protocol with no data marshalling overhead:
import numpy as np
depth = frames.get_depth_frame()
depth_data = depth.as_frame().get_data()
np_image = np.asanyarray(depth_data)
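The zero-copy behavior can be illustrated without a camera by using a plain bytearray in place of the frame's buffer; the 640x480 shape and 16-bit depth format are assumptions about a typical depth stream, not something this snippet queries from a device:

```python
import numpy as np

# A writable buffer standing in for a depth frame: 640x480 pixels, 16 bits each.
buf = bytearray(480 * 640 * 2)
np_image = np.frombuffer(buf, dtype=np.uint16).reshape(480, 640)

buf[0] = 1              # mutate the underlying buffer...
print(np_image.shape)   # (480, 640)
print(np_image[0, 0])   # ...the array sees the change: no copy was made
```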
Reference:
https://github.com/IntelRealSense/librealsense/tree/master/wrappers/python