Good afternoon, everyone.
Following the tutorial for installing usb_cam on Groovy:
> http://wiki.ros.org/ROSberryPi/Setting%20up%20ROS%20on%20RaspberryPi#ROS_Peripherals

I can `roscd usb_cam` and I have all the dependencies installed, but when I run `rosmake` I get the following:
```
Configuring incomplete, errors occurred!
-------------------------------------------------------------------------------
[ rosmake ] Output from build of package usb_cam written to:
[ rosmake ] /home/pi/.ros/rosmake/rosmake_output-20140116-192405/usb_cam/build_output.log
[rosmake-0] Finished <<< usb_cam [FAIL] [ 16.84 seconds ]
[ rosmake ] Halting due to failure in package usb_cam.
[ rosmake ] Waiting for other threads to complete.
[ rosmake ] Results:
[ rosmake ] Built 36 packages with 1 failures.
[ rosmake ] Summary output to directory
[ rosmake ] /home/pi/.ros/rosmake/rosmake_output-20140116-192405
```
And if I try `rosrun usb_cam usb_cam_node`, I get:
```
[rosrun] Couldn't find executable named usb_cam_node below /home/pi/catkin_ws/src/bosch_drivers/usb_cam
[rosrun] Found the following, but they're either not files
[rosrun] or not executable:
[rosrun] /home/pi/catkin_ws/src/bosch_drivers/usb_cam/src/usb_cam_node
```
Could anyone help me please?
Thanks
**EDIT**

This is the build_output log:
```
mkdir -p bin
cd build && cmake -Wdev -DCMAKE_TOOLCHAIN_FILE=/opt/ros/groovy/share/ros/core/rosbuild/rostoolchain.cmake ..
-- The C compiler identification is GNU 4.6.3
-- The CXX compiler identification is GNU 4.6.3
-- Check for working C compiler: /usr/bin/gcc
-- Check for working C compiler: /usr/bin/gcc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Found PythonInterp: /usr/bin/python (found version "2.7.3")
[rosbuild] Building package usb_cam
Failed to invoke /opt/ros/groovy/bin/rospack deps-manifests usb_cam
[rospack] Error: package/stack 'usb_cam' depends on non-existent package 'self_test' and rosdep claims that it is not a system dependency. Check the ROS_PACKAGE_PATH or try calling 'rosdep update'
CMake Error at /opt/ros/groovy/share/ros/core/rosbuild/public.cmake:129 (message):
  Failed to invoke rospack to get compile flags for package 'usb_cam'. Look above for errors from rospack itself. Aborting. Please fix the broken dependency!
Call Stack (most recent call first):
  /opt/ros/groovy/share/ros/core/rosbuild/public.cmake:203 (rosbuild_invoke_rospack)
  CMakeLists.txt:5 (rosbuild_init)
-- Configuring incomplete, errors occurred!
make: *** [all] Error 1
```
Thanks a lot
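The decisive line in the log is the rospack one: usb_cam's manifest declares a dependency on self_test, and rospack cannot find that package anywhere on ROS_PACKAGE_PATH (at the time, self_test shipped in the diagnostics stack, so checking that that stack is installed and on the path is the first thing to try). As a rough Python illustration of the lookup the error message asks you to check (the paths and helper below are made up, not a real rospack API):

```python
import os

def find_package(name, package_path, exists):
    """First-match lookup, roughly what rospack does along ROS_PACKAGE_PATH."""
    for root in package_path.split(":"):
        manifest = os.path.join(root, name, "manifest.xml")
        if exists(manifest):
            return os.path.join(root, name)
    return None

# Simulated tree in which usb_cam is checked out but self_test is not:
fake_tree = {"/home/pi/catkin_ws/src/usb_cam/manifest.xml"}
on_disk = lambda p: p in fake_tree

search = "/opt/ros/groovy/share:/home/pi/catkin_ws/src"
missing = find_package("self_test", search, on_disk)  # None -> rospack's error
found = find_package("usb_cam", search, on_disk)
```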
**[Solved] Problem with rosmake and rosrun usb_cam usb_cam_node**

**How can I see images from usb_cam?**
Good afternoon.
I have usb_cam installed on Groovy on Raspbian.
I have run `rosparam set usb_cam/pixel_format yuyv`, and after that `rosrun usb_cam usb_cam_node`.
I get:
```
usb_cam video_device set to [/dev/video0]
usb_cam io_method set to [mmap]
usb_cam image_width set to [640]
usb_cam image_height set to [480]
usb_cam pixel_format set to [yuyv]
usb_cam auto_focus set to [0]
10 frames/sec at 1353855178.185472348
10 frames/sec at 1353855178.314173063
9 frames/sec at 1353855178.849589157
10 frames/sec at 1353855179.782349046
...
```
My `rostopic list` output is the following:
```
/rosout
/rosout_agg
/usb_cam/camera_info
/usb_cam/image_raw
/usb_cam/image_raw/compressed
/usb_cam/image_raw/compressed/parameter_descriptions
/usb_cam/image_raw/compressedDepth
/usb_cam/image_raw/compressedDepth/parameter_updates
/usb_cam/image_raw/compressedDepth/parameter_descriptions
/usb_cam/image_raw/theora
/usb_cam/image_raw/theora/parameter_descriptions
/usb_cam/image_raw/theora/parameter_updates
```
What do I have to do next to be able to see the captured images?
Thank you so much.
**usb_cam: [ERROR] Webcam: expected picture but didn't get it...**
I'm trying to use
```
rosrun usb_cam usb_cam_node
```
to read my web camera, but it keeps returning this error:

```
[ERROR] [1392890872.217213339]: Webcam: expected picture but didn't get it...
[ERROR] [1392890872.252854641]: Webcam: expected picture but didn't get it...
[ERROR] [1392890872.285282152]: Webcam: expected picture but didn't get it...
[ERROR] [1392890872.317282039]: Webcam: expected picture but didn't get it...
[ERROR] [1392890872.352863707]: Webcam: expected picture but didn't get it...
[ERROR] [1392890872.385424928]: Webcam: expected picture but didn't get it...
[ERROR] [1392890872.421215144]: Webcam: expected picture but didn't get it...
[ERROR] [1392890872.453295782]: Webcam: expected picture but didn't get it...
[ERROR] [1392890872.485294812]: Webcam: expected picture but didn't get it...
[ERROR] [1392890872.521231924]: Webcam: expected picture but didn't get it...
[ERROR] [1392890872.553230292]: Webcam: expected picture but didn't get it...
...
```
The camera works well when using `guvcview`.
**Problem Retrieving Image from usb_cam**
Hi guys
I've been trying to get images from usb_cam.
I did the same as in this question: http://answers.ros.org/question/131331/usb_cam-errorwebcam-expected-picture-but-didnt-get-it/
But when I try to get an image, I get something like the following image:

What is happening?
Thanks
**Red colour detection**
I want to detect the colour red in ROS Hydro on Ubuntu 12.04. I am using OpenCV with the usb_cam driver. How do I do it?
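Since this is a how-to, here is a minimal standalone sketch of the usual HSV-threshold approach, written with only NumPy and the stdlib's colorsys so it runs anywhere. In a real node you would convert the ROS image with cv_bridge and use cv2.cvtColor plus two cv2.inRange calls (red sits at both ends of the hue circle); the thresholds below are illustrative and need tuning for your lighting.

```python
import colorsys
import numpy as np

def red_mask(rgb):
    """Boolean mask of 'red' pixels; hue wraps, so test both ends of the circle."""
    h, w, _ = rgb.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            hue, sat, val = colorsys.rgb_to_hsv(*(rgb[i, j] / 255.0))
            mask[i, j] = (hue < 0.03 or hue > 0.97) and sat > 0.5 and val > 0.2
    return mask

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (255, 0, 0)    # pure red
img[1, 1] = (0, 255, 0)    # pure green
mask = red_mask(img)       # only the red pixel is flagged
```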
**Time difference between the displayed and the actual webcam image**
Hi all,
I'm using a Pandaboard ES (ARM) and would like to synchronize a webcam (logitech c920) and a kinect. I want to grab single frames on a keypress and save pcd and jpg to disk.
Unfortunately, there is a big lag in the webcam image which prevents a good result.
I would like to know if there is a way to avoid the 2-5 second lag of my webcam image (high resolution, 2304x1536, YUYV format).
I tried usb_cam, uvc_camera (which is not working properly, see this [topic](http://answers.ros.org/question/47209/uvc_camera-gets-a-terrible-image-and-usb_camera-doesnt-set-framerate/)) and the gscam package. But unfortunately all of them have a big lag between the displayed and the actual image. I know that the frame rate will be low, which is not a big problem; it's more that the data is wrong by the time I want to save it.
**Edit 1:**
I looked for an unwanted queue, as tfoote mentioned. The usb_cam package seems to me to have a queue size of 1, and my subscriber uses a queue size of 1 too. Could it be something else, or am I wrong about usb_cam?
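For what it's worth, a queue size of 1 on the subscriber side means a new message simply evicts the unread one, so a slow consumer sees the newest frame rather than a growing backlog; if both ends really use queue size 1, the 2-5 s lag must be buffered further upstream (driver, V4L2, or the camera itself). A toy illustration of that one-slot behaviour:

```python
from collections import deque

q = deque(maxlen=1)            # one-slot queue: a new frame evicts the old one
for frame_id in range(10):     # producer outruns the consumer
    q.append(frame_id)
latest = q.pop()               # the consumer still gets the newest frame
```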
**Edit 2:**
I realized that my camera offers MJPEG compression directly on the webcam. This works with the usb_cam package and I get a much higher frame rate (around 10 fps without parallel Kinect capturing, and around 5 with the Kinect capturing). The lag is now around 1 second, but the resolution is also lower, at 1920x1080.
My first guess was that the camera has an internal memory which gets filled when the frames are not processed in time. I therefore lowered the frame rate directly within the v4l2 driver to 5 fps. This didn't bring better results either.
It would be great to get your thoughts on this problem.
**side note:** I get the following warning when using MJPEG compression:

```
[swscaler @ 0x1c57aa0] No accelerated colorspace conversion found from yuv422p to rgb24.
```

which is probably ffmpeg-related and, according to their forum, shouldn't cause big problems. I tried to solve it by installing various versions of ffmpeg, without success.
**How can I receive single images from an image_transport stream using a service client?**
Hello,
I am trying to set up a service which provides single images out of an image_transport stream to a client requesting an image.
I found Question 42909, which is in principle what I need (karma not sufficient to post links, sorry):
answers.ros.org/question/42909/how-can-i-use-a-servicerequest-to-capture-a-single-callback-from-a-subscription/
Since I'm new to ROS and the example is simplified, I'm stuck at the point where the image is presented to the client.
What I would need to know is:
1) What does the srv file need to look like to use the example (if a srv file is needed at all)?
2) How can I actually get access to the image on the client side to do image processing on it? In the example this part is simplified with the comment `read_grab()`.
I searched ROS answers for hours with no success so I hope somebody understands my problem description and is willing to help.
Thanks in advance, HXP2
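On (1) and (2), the pattern from Question 42909 reduces to: the subscriber callback caches the newest image, and the service handler returns that cached copy to the client, which then runs its image processing on the response. The srv file would then typically have an empty request and a single `sensor_msgs/Image` field in the response (an assumption about the setup, not a fixed requirement). A ROS-free sketch of the flow, with illustrative names:

```python
# Shape of the pattern from Question 42909, stripped of ROS specifics so the
# flow is visible; the class and field names are illustrative, not a real API.
class SingleImageServer:
    def __init__(self):
        self.latest = None          # most recent image from the subscriber

    def image_callback(self, img):  # image_transport subscriber callback
        self.latest = img           # overwrite: keep only the newest frame

    def handle_request(self, _req): # service handler: return the cached image
        if self.latest is None:
            raise RuntimeError("no image received yet")
        return {"image": self.latest}

server = SingleImageServer()
server.image_callback(b"frame-1")
server.image_callback(b"frame-2")
response = server.handle_request(None)   # client gets the latest frame
```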
**Webcam does not display when using visp_auto_tracker**
Hi everyone,
I am trying to get a Logitech webcam to work with visp_auto_tracker to do some QR tracking. I am running ROS Hydro on Ubuntu. I started off with usb_cam and that worked fine with the webcam, so now I am trying the live video example launch file (tracklive_usb.launch) on the wiki page at wiki.ros.org/visp_auto_tracker. The webcam light turns on, but there is no camera feed, and I get this error:
```
[ WARN] [1402326372.657815348]: Camera calibration file /home/toshiba/.ros/camera_info/head_camera.yaml not found.
terminate called after throwing an instance of 'std::runtime_error'
  what(): uncalibrated camera
[visp_auto_tracker-1] process has died [pid 17186, exit code -6, cmd /opt/ros/hydro/lib/visp_auto_tracker/visp_auto_tracker /visp_auto_tracker/camera_info:=/usb_cam/camera_info /visp_auto_tracker/image_raw:=/usb_cam/image_raw __name:=visp_auto_tracker __log:=/home/toshiba/.ros/log/fc4ac0ee-eb3d-11e3-a916-70f1a177b603/visp_auto_tracker-1.log].
log file: /home/toshiba/.ros/log/fc4ac0ee-eb3d-11e3-a916-70f1a177b603/visp_auto_tracker-1*.log
```
So then I try to calibrate the camera, following wiki.ros.org/camera_calibration/Tutorials/MonocularCalibration, with this line:

```
rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.108 image:=/camera/image_raw camera:=/usb_cam
```
But I get this message:
```
('Waiting for service', '/usb_cam/set_camera_info', '...')
Service not found
```
I am pretty new to ROS having just started a few weeks ago, so I'm kind of stumped. Is it really a problem with calibration? Any help is much appreciated. Thanks!
**Problem displaying webcam on BeagleBone Black with ROS Hydro**
Hi everyone,
I have Ubuntu 12.04 and ROS Hydro installed on a BeagleBone Black, and I am trying to connect a webcam to it. When I run `lsusb` I see the webcam listed, and it is recognized as /dev/video0. However, when I run:

```
rosrun usb_cam usb_cam_node
```

I get this error at the bottom:
```
[ INFO] [1402520069.293107674]: Camera name: head_camera
[ INFO] [1402520069.296888674]: Camera info url:
[ INFO] [1402520069.301380049]: usb_cam video_device set to [/dev/video0]
[ INFO] [1402520069.304736965]: usb_cam io_method set to [mmap]
[ INFO] [1402520069.307870465]: usb_cam image_width set to [640]
[ INFO] [1402520069.311110715]: usb_cam image_height set to [480]
[ INFO] [1402520069.314403132]: usb_cam pixel_format set to [mjpeg]
[ INFO] [1402520069.317508174]: usb_cam auto_focus set to [0]
[swscaler @ 0x3f640] No accelerated colorspace conversion found from yuv422p to rgb24.
[ INFO] [1402520069.502437715]: using default calibration URL
[ INFO] [1402520069.506553799]: camera calibration URL: file:///home/ubuntu/.ros/camera_info/head_camera.yaml
[swscaler @ 0x3fee0] No accelerated colorspace conversion found from yuv422p to rgb24.
[swscaler @ 0x3fee0] No accelerated colorspace conversion found from yuv422p to rgb24.
[swscaler @ 0x3fee0] No accelerated colorspace conversion found from yuv422p to rgb24.
[swscaler @ 0x3fee0] No accelerated colorspace conversion found from yuv422p to rgb24.
[swscaler @ 0x3fee0] No accelerated colorspace conversion found from yuv422p to rgb24.
[swscaler @ 0x3fee0] No accelerated colorspace conversion found from yuv422p to rgb24.
```
Is there a way to fix this conversion issue? Googling hasn't helped me much with this. I appreciate any kind of help...thanks!
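Note that the swscaler line is a warning, not an error: it means libswscale has no SIMD-accelerated routine for this platform (common on ARM) and falls back to plain C, so frames still convert, just more slowly. The conversion it refers to is ordinary YUV 4:2:2 to RGB; a rough NumPy sketch of that math (packed YUYV layout for simplicity, BT.601-style full-range coefficients, not ffmpeg's exact code):

```python
import numpy as np

def yuyv_to_rgb(buf, width, height):
    """Unpack YUYV (YUV 4:2:2) and convert to RGB24 (BT.601 full-range approx)."""
    yuyv = np.frombuffer(buf, dtype=np.uint8).reshape(height, width // 2, 4).astype(np.float32)
    y = yuyv[:, :, [0, 2]].reshape(height, width)      # two luma samples per macropixel
    u = np.repeat(yuyv[:, :, 1], 2, axis=1) - 128.0    # chroma shared by the pixel pair
    v = np.repeat(yuyv[:, :, 3], 2, axis=1) - 128.0
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return np.clip(np.dstack([r, g, b]), 0, 255).astype(np.uint8)

# A mid-grey 640x480 frame (Y=128, U=V=128) should convert to R=G=B=128:
frame = bytes([128, 128, 128, 128]) * (640 * 480 // 2)
rgb = yuyv_to_rgb(frame, 640, 480)
```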
**How to subscribe visp_auto_tracker to /usb_cam/image_raw/compressed?**
Hi all,
I am wondering if this is possible, since just subscribing to /usb_cam/image_raw causes a lot of lag, and the visp_auto_tracker debug display window will freeze up. Here is my launch file to run visp_auto_tracker (where I try to subscribe to /usb_cam/image_raw/compressed):
But when I run this, I get this error in the terminal where I am running the usb_cam node:

```
[ERROR] [1405534682.215726942]: Client [/visp_auto_tracker] wants topic /usb_cam/image_raw/compressed to have datatype/md5sum [sensor_msgs/Image/060021388200f6f0f447d0fcd9c64743], but our version has [sensor_msgs/CompressedImage/8f7a12909da2c9d3332d540a0977563f]. Dropping connection.
```

So, is there a way to properly subscribe visp_auto_tracker to /usb_cam/image_raw/compressed? Appreciate any help, thanks!
**PTAM and cameracalibrator won't start**
Hi,
I'm trying to get PTAM working, but it doesn't start up. I use usb_cam, and since it complained that it had no calibration file, I created one with the [camera_calibrator](http://wiki.ros.org/camera_calibration) node (not PTAM's). The [resulting yaml file](http://codepaste.net/w179fq) is in ~/.ros/camera_info/head_camera.yaml.
I tried to start up PTAM with the launcher and also by just running the node.
Since running the node resulted in

```
[ERROR] [1405509643.099105236]: Camera calibration is missing!
```

I thought perhaps I have to recalibrate it with the PTAM cameracalibrator, but that doesn't work either.
For both launch files the console says that the GUI is on, but neither of them starts.
For the ptam launch file I get

```
[FATAL] [1405526258.946873148]: ASSERTION FAILED
 file = /home/gerken/ROS/cat_workspace/src/ethzasl_ptam/ptam/src/System.cc
 line = 95
 cond = img->encoding == sensor_msgs::image_encodings::MONO8 && img->step == img->width
```

Could it be that I have to set the encoding to mono8 somehow?
Camera Info is

```
---
header:
  seq: 8394
  stamp:
    secs: 1405526539
    nsecs: 828073023
  frame_id: head_camera
height: 480
width: 640
distortion_model: plumb_bob
D: [0.105716, -1.882404, -0.020112, 0.008704, 0.0]
K: [1043.949364, 0.0, 312.91979, 0.0, 1042.363345, 178.105619, 0.0, 0.0, 1.0]
R: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
P: [1036.78418, 0.0, 314.804614, 0.0, 0.0, 1031.130981, 173.315955, 0.0, 0.0, 0.0, 1.0, 0.0]
binning_x: 0
binning_y: 0
roi:
  x_offset: 0
  y_offset: 0
  height: 0
  width: 0
  do_rectify: False
---
```
ptam.launch is
I didn't change PtamFixValues.yaml, as the width and the height of the picture are the default values.
Thanks for your Help!
Andreas
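On the mono8 question: the assertion indeed says PTAM wants a single-channel mono8 image whose row step equals its width, while usb_cam publishes a colour encoding. One option is to feed PTAM a greyscale topic (e.g. image_proc's mono output) instead of recalibrating. The conversion itself is just a Rec.601 luma weighting; a NumPy sketch (illustrative, not PTAM's code):

```python
import numpy as np

def to_mono8(rgb):
    """Rec.601 luma: collapses an rgb8 frame to the mono8 layout PTAM asserts on."""
    mono = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return np.ascontiguousarray(mono.astype(np.uint8))

rgb = np.zeros((480, 640, 3), dtype=np.uint8)
mono = to_mono8(rgb)
step = mono.strides[0]   # bytes per row; for mono8 this equals the image width
```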
**Camera model not correctly specified**
I'm trying to get rpg_svo, a project implementing semi-direct visual odometry, running. I can run the sample code on a dataset, but not on a live camera stream.
I am using Indigo with the latest pull of image_pipeline. I hope the error is not because I am using Indigo rather than a more stable version of ROS.
Here is my terminal dump:
```
SUMMARY
========

CLEAR PARAMETERS
 * /svo/

PARAMETERS
 * /rosdistro: indigo
 * /rosversion: 1.11.7
 * /svo/cam_topic: /usb_cam/image_raw
 * /svo/camera_matrix/cols: 3
 * /svo/camera_matrix/data: [423.946127, 0, 3...
 * /svo/camera_matrix/rows: 3
 * /svo/camera_name: narrow_stereo
 * /svo/distortion_coefficients/cols: 5
 * /svo/distortion_coefficients/data: [-0.395356, 0.132...
 * /svo/distortion_coefficients/rows: 1
 * /svo/distortion_model: plumb_bob
 * /svo/grid_size: 30
 * /svo/image_height: 480
 * /svo/image_width: 640
 * /svo/loba_num_iter: 0
 * /svo/max_n_kfs: 10
 * /svo/projection_matrix/cols: 4
 * /svo/projection_matrix/data: [312.081909, 0, 2...
 * /svo/projection_matrix/rows: 3
 * /svo/rectification_matrix/cols: 3
 * /svo/rectification_matrix/data: [1, 0, 0, 0, 1, 0...
 * /svo/rectification_matrix/rows: 3

NODES
  /
    svo (svo_ros/vo)

core service [/rosout] found
process[svo-1]: started with pid [14234]
create vo_node
[ WARN] [1406362716.365539658]: Cannot find value for parameter: svo/publish_img_pyr_level, assigning default: 0
[ WARN] [1406362716.367242172]: Cannot find value for parameter: svo/publish_every_nth_img, assigning default: 1
[ WARN] [1406362716.368814410]: Cannot find value for parameter: svo/publish_every_nth_dense_input, assigning default: 1
[ WARN] [1406362716.372849123]: Cannot find value for parameter: svo/publish_world_in_cam_frame, assigning default: 1
[ WARN] [1406362716.374322519]: Cannot find value for parameter: svo/publish_map_every_frame, assigning default: 0
[ WARN] [1406362716.375674323]: Cannot find value for parameter: svo/publish_point_display_time, assigning default: 0
[ WARN] [1406362716.685035733]: Cannot find value for parameter: svo/publish_markers, assigning default: 1
[ WARN] [1406362716.685835016]: Cannot find value for parameter: svo/publish_dense_input, assigning default: 0
[ WARN] [1406362716.686585599]: Cannot find value for parameter: svo/accept_console_user_input, assigning default: 1
[ERROR] [1406362716.687362055]: Cannot find value for parameter: svo/cam_model
terminate called after throwing an instance of 'std::runtime_error'
  what(): Camera model not correctly specified.
[svo-1] process has died [pid 14234, exit code -6, cmd /home/long/Workspaces/catkin/devel/lib/svo_ros/vo __name:=svo __log:=/home/long/.ros/log/5b375c1a-149d-11e4-8bc0-24f5aa6f5f4f/svo-1.log].
log file: /home/long/.ros/log/5b375c1a-149d-11e4-8bc0-24f5aa6f5f4f/svo-1*.log
all processes on machine have died, roslaunch will exit
shutting down processing monitor...
... shutting down processing monitor complete
done
```
This is my launch file for usb_cam:
And the launch file of svo_ros:
Any ideas?
**visp_auto_tracker does not output pose data and is unable to track properly**
Hi all,
visp_auto_tracker was working perfectly fine for me until today. When I ran it, I got the message below repeatedly as the webcam tracked the QR code:
```
*********** Parsing XML for Mb Edge Tracker ************
ecm : mask : size : 5
ecm : mask : nb_mask : 180
ecm : range : tracking : 10
ecm : contrast : threshold 5000
ecm : contrast : mu1 0.5
ecm : contrast : mu2 0.5
sample : sample_step : 4
sample : n_total_sample : 250
klt : Mask Border : 0
klt : Max Features : 10000
klt : Windows Size : 5
klt : Quality : 0.01
klt : Min Distance : 20
klt : Harris Parameter : 0.01
klt : Block Size : 3
klt : Pyramid Levels : 3
face : Angle Appear : 75
face : Angle Disappear : 75
camera : u0 : 325.03 (default)
camera : v0 : 219.983 (default)
camera : px : 523.553 (default)
camera : py : 520.561 (default)
(L0) !! /tmp/buildd/ros-hydro-visp-2.9.0-4precise-20140617-1429/src/camera/vpCameraParameters.cpp: get_K(#403) :
getting K matrix in the case of projection with distortion has no sense
Tracking failed
getting K matrix in the case of projection with distortion has no sense
Tracking done in 33.239 ms
```
When I checked `rostopic echo /visp_auto_tracker/object_position`, I saw that I was no longer getting any pose data from the webcam. The tracking markers that were supposed to show up on the debug display window were no longer there either. I noticed there were only 8 markers at the corners of the QR code, labelled mi1, mi2, mi3, mi4, mo1, mo2, mo3, mo4. I'm kind of stumped, since this problem came out of nowhere and I don't recall changing anything. Any help is greatly appreciated. Thanks!
**Camera Calibration Uses Wrong Aspect Ratio from USB Camera**
We are trying to calibrate a USB camera running at 1080p (i.e., 1920x1080). When launching `usb_cam` and then `image_view`, the image comes up correctly at the right aspect ratio.
`rosrun usb_cam usb_cam_node _image_width:=1920 _image_height:=1080 _pixel_format:=yuyv`
`rosrun image_view image_view image:=/usb_cam/image_raw`

However, whenever we try to run the camera calibration node, the aspect ratio of the image seems to be reversed:
`rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.025 image:=/usb_cam/image_raw camera:=/usb_cam`

We are running Ubuntu 14.04 64bit with the latest Indigo build.
**uvc_camera: pixel format unavailable with USBTV007 EasyCAP**
Hi guys,
I'm using a USBTV007 EasyCAP (ID 1b71:3002 in `lsusb`) to capture wireless video. The system can already see it in /dev/ as video0, and I can use VLC media player to display the video on the screen and even record the video to a file. However, when I tried using uvc_camera to access the camera, this appeared:

```
pixfmt 0 = 'YUYV' desc = '16 bpp YUY2, 4:2:2, packed'
terminate called after throwing an instance of 'std::runtime_error'
  what(): pixel format unavailable
```

It cannot start, and I cannot get the video into ROS.
I also tried uvc_cam and usb_cam, and they don't work either.
I'm running Ubuntu 12.04 with Groovy. Also the kernel is updated to 3.11 (I found [here](http://www.linuxtv.org/wiki/index.php/Easycap#USBTV007_EasyCAP) that from 3.11, this model is supported; before the update the device was not seen as a video device.)
Any solutions or suggestions are appreciated.
Thank you,
Veerachart
**Minoru 3D webcam calibration in ROS Indigo**
Hi everyone,
I just bought a Minoru 3D webcam to try using it as a low-cost RGBD sensor. I'm working in ROS Indigo. I tried calibrating the camera with the cameracalibrator.py utility; the problem is that no matter how many frames I take, the extrinsic calibration always fails, because with the driver I was using there is too much lag between the frames of the two channels (the camera is not synchronized).
I tried several other drivers, and the best so far (the one that gives the best visual results in terms of lag) seems to be the usb_cam driver. But if I use this driver, the cameracalibrator.py node can't start. I think this problem is related to the set_camera_info service, as described in this link:
http://www.iheartrobotics.com/2010/05/testing-ros-usb-camera-drivers.html
The fix they suggest in the Python code is already present in my version, but it still doesn't work.
I don't need that service: since I just need to get the calibration matrices, I can set the calibration files manually.
What can I do? Thank you all in advance!
**EDIT:**
Here is my launch file:
And this is the command i use to launch the camera calibrator:
Now the cameracalibrator starts properly, but I can see in the calibration window that the frames are still visually unsynchronized (i.e., if I blink, I don't see this happening simultaneously in the two visualization windows). I've also tried different resolutions and framerates.
I also added the flag `--approximate=0.01`, up to 0.05 (it crashes at 0.1):

```
rosrun camera_calibration cameracalibrator.py --size 9x6 --square 0.011 right:=/right_camera/image_raw left:=/left_camera/image_raw right_camera:=/right_camera left_camera:=/left_camera --no-service-check
```

And when I do so, simply nothing happens: no errors, no crashes. Maybe it keeps waiting for something? This is the output of the `rostopic list` command:

```
/left_camera/camera_info
/left_camera/image_raw
/left_camera/image_raw/compressed
/left_camera/image_raw/compressed/parameter_descriptions
/left_camera/image_raw/compressed/parameter_updates
/left_camera/image_raw/compressedDepth
/left_camera/image_raw/compressedDepth/parameter_descriptions
/left_camera/image_raw/compressedDepth/parameter_updates
/left_camera/image_raw/theora
/left_camera/image_raw/theora/parameter_descriptions
/left_camera/image_raw/theora/parameter_updates
/right_camera/camera_info
/right_camera/image_raw
/right_camera/image_raw/compressed
/right_camera/image_raw/compressed/parameter_descriptions
/right_camera/image_raw/compressed/parameter_updates
/right_camera/image_raw/compressedDepth
/right_camera/image_raw/compressedDepth/parameter_descriptions
/right_camera/image_raw/compressedDepth/parameter_updates
/right_camera/image_raw/theora
/right_camera/image_raw/theora/parameter_descriptions
/right_camera/image_raw/theora/parameter_updates
/rosout
/rosout_agg
```

**EDIT2:** I've edited my launch file to use uvc_camera:
**pkg_check_modules doesn't get the right libavcodec**
I've struggled for some time to make usb_cam support an MJPEG-format USB camera, as the libavcodec and libswscale that come with ROS somehow don't work properly: they complain `No accelerated colorspace conversion found from yuv422p to rgb24.`
So I installed a new ffmpeg that enables libx264. After that, two versions of ffmpeg existed on my file system: the old one located in /usr/lib/x86_64-linux-gnu, the new one in /usr/local/lib. In order to find the correct version, I changed two lines in the CMakeLists.txt:

```
pkg_check_modules(avcodec libavcodec>=56.12 REQUIRED)
pkg_check_modules(swscale libswscale>=3.1 REQUIRED)
```

If I run cmake inside the usb_cam directory, pkg_check_modules finds the correct header file and libraries; however, if I run catkin_make, it returns the old ones. How can pkg_check_modules behave differently with/without ROS? I don't know what to do...
Thanks to ahendrix! I added a few lines to print some pkg-config-generated values:

```
MESSAGE(STATUS "@@@@@@@@@@@@@@@@@ " ${avcodec_LIBRARIES})
MESSAGE(STATUS "@@@@@@@@@@@@@@@@@ " ${avcodec_LIBRARY_DIRS})
MESSAGE(STATUS "@@@@@@@@@@@@@@@@@ " ${avcodec_LDFLAGS})
MESSAGE(STATUS "@@@@@@@@@@@@@@@@@ " ${avcodec_INCLUDE_DIRS})
MESSAGE(STATUS "@@@@@@@@@@@@@@@@@ " ${avcodec_CFLAGS})
MESSAGE(STATUS "@@@@@@@@@@@@@@@@@ " ${avcodec_VERSION})
MESSAGE(STATUS "@@@@@@@@@@@@@@@@@ " ${avcodec_PREFIX})
MESSAGE(STATUS "@@@@@@@@@@@@@@@@@ " ${avcodec_INCLUDEDIR})
MESSAGE(STATUS "@@@@@@@@@@@@@@@@@ " ${avcodec_LIBDIR})
```
After I deleted build/ and devel/ and ran catkin_make as ahendrix suggested, the cmake output shows it found the right version:

```
-- checking for module 'libavcodec'
--   found libavcodec, version 56.13.100
-- checking for module 'libswscale'
--   found libswscale, version 3.1.101
-- @@@@@@@@@@@@@@@@@ avcodec
-- @@@@@@@@@@@@@@@@@ /usr/local/lib
-- @@@@@@@@@@@@@@@@@ -L/usr/local/lib -lavcodec
-- @@@@@@@@@@@@@@@@@ /usr/local/include
-- @@@@@@@@@@@@@@@@@ -I/usr/local/include
-- @@@@@@@@@@@@@@@@@ 56.13.100
-- @@@@@@@@@@@@@@@@@ /usr/local
-- @@@@@@@@@@@@@@@@@ /usr/local/include
-- @@@@@@@@@@@@@@@@@ /usr/local/lib
```
However, when I run the node with `rosrun usb_cam usb_cam_node _pixel_format:=mjpeg`, it just ends with a segmentation fault. But if I write the CMakeLists.txt like this:

```
#find_package(PkgConfig REQUIRED)
#pkg_check_modules(avcodec libavcodec REQUIRED)
#pkg_check_modules(swscale libswscale REQUIRED)
set(avcodec_INCLUDE_DIRS /usr/local/include/libavcodec)
set(swscale_INCLUDE_DIRS /usr/local/include/libswscale)
set(avcodec_LIBRARIES /usr/local/lib/libavcodec.so.56)
set(swscale_LIBRARIES /usr/local/lib/libswscale.so.3)
```
then usb_cam starts to work. So I wonder what happens to make such a difference.
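One plausible explanation for the cmake-vs-catkin_make difference (an assumption, not verified against this setup): pkg-config returns the first matching .pc file along PKG_CONFIG_PATH, and sourcing a ROS setup file prepends ROS's own pkgconfig directories to that variable, so under a ROS environment a ROS-shipped libavcodec can shadow the one in /usr/local. A toy reproduction of that first-match search, with made-up paths:

```python
import os

def first_pc(name, search_dirs, exists):
    """pkg-config-style first-match lookup for <name>.pc along a dir list."""
    for d in search_dirs:
        candidate = os.path.join(d, name + ".pc")
        if exists(candidate):
            return candidate
    return None

fake_fs = {"/opt/ros/indigo/lib/pkgconfig/libavcodec.pc",
           "/usr/local/lib/pkgconfig/libavcodec.pc"}
on_disk = lambda p: p in fake_fs

# With the ROS dir first (as after sourcing setup.bash), the old copy wins:
ros_first = first_pc("libavcodec",
                     ["/opt/ros/indigo/lib/pkgconfig", "/usr/local/lib/pkgconfig"], on_disk)
# Putting /usr/local first on PKG_CONFIG_PATH flips the result:
local_first = first_pc("libavcodec",
                       ["/usr/local/lib/pkgconfig", "/opt/ros/indigo/lib/pkgconfig"], on_disk)
```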
**How to use an SDK (non-ROS headers/libs/binaries) in a ROS package?**
I am trying to use a camera SDK to write a camera stream publisher in ROS.
It's not a UVC camera, so I cannot use usb_cam or other similar nodes to access the camera stream.
I already have the camera SDK, consisting of ASICamera.h / libASICamera.a / libASICamera.so, and I've successfully got it running on its own.
But now I don't know how to use it in a ROS node.
After I add the header files to the package/include directory and `#include "ASICamera.h"` in the publisher node, I type `catkin_make`. It reports:

```
make[2]: *** [asi_converter_real/CMakeFiles/asi_converter_real.dir/src/asi_converter_real.cpp.o] Error 1
make[1]: *** [asi_converter_real/CMakeFiles/asi_converter_real.dir/all] Error 2
```

I think the reason is that I have not linked the .a and .so into the publisher node, but I don't know how to do this.
In package.xml, it seems that the build_depend and run_depend tags can only reference ROS packages' static/dynamic libraries. How can I use my own .a and .so in a ROS node?
**Camera calibration parameters not appropriate for SVO package**
Hello, I have been trying to use the [SVO package](https://github.com/uzh-rpg/rpg_svo) on Ubuntu 12.04 and ROS Hydro, but I ran into trouble with the calibration file. The [SVO calibration guidelines](https://github.com/uzh-rpg/rpg_svo/wiki/Camera-Calibration) say that the [camera_calibration](http://wiki.ros.org/camera_calibration) package can be used to obtain a calibration file that works with SVO, but the format of the calibration file expected by SVO looks like this:

```
cam_model: Pinhole
cam_width: 752
cam_height: 480
cam_fx: 414.536145
cam_fy: 414.284429
cam_cx: 348.804988
cam_cy: 240.076451
cam_d0: -0.283076
cam_d1: 0.066674
cam_d2: 0.000896
cam_d3: 0.000778
```

And the one obtained with camera_calibration, after converting to YAML using the [camera_calibration_parsers](http://wiki.ros.org/camera_calibration_parsers), looks like this:

```
image_width: 1024
image_height: 768
camera_name: narrow_stereo
camera_matrix:
  rows: 3
  cols: 3
  data: [1114.338842, 0, 488.044082, 0, 1483.616573, 384.580052, 0, 0, 1]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [0.089323, -0.327338, -0.003856, -0.002099, 0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [1, 0, 0, 0, 1, 0, 0, 0, 1]
projection_matrix:
  rows: 3
  cols: 4
  data: [1117.866699, 0, 485.532623, 0, 0, 1492.567993, 382.458084, 0, 0, 0, 1, 0]
```

I do not see how to make SVO work with the calibration file I'm getting, and I have also not found a way to convert said file to the format that works.
I found another forum question where a person with the same problem came to the conclusion that the camera_calibration file didn't work with SVO and that the ATAN-format file from [ethzasl_ptam](http://wiki.ros.org/ethzasl_ptam) had to be used instead. But when I try to build the ptam package I get a missing-header error that I haven't been able to solve either (it's [this error](https://github.com/ethz-asl/ethzasl_ptam/issues/39), and the solutions found in that thread haven't worked for me in either Hydro or Groovy). Which is why I need to make the camera_calibration file work.
Thank you very much in advance for any ideas or suggestions.
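A sketch of one way to bridge the two formats, under the assumption (suggested by the field names above) that SVO's cam_fx/cam_fy/cam_cx/cam_cy are just the pinhole intrinsics from the camera matrix and cam_d0..cam_d3 are the first four plumb_bob coefficients:

```python
# Hypothetical converter from a camera_calibration-style dict to the flat
# cam_* keys shown in the SVO file above; the SVO field names are taken from
# the question, so verify them against your SVO version. plumb_bob carries
# five distortion coefficients while the SVO file shows four; the fifth (k3)
# is dropped here, which is only safe when it is zero.
def ros_calib_to_svo(calib):
    fx, _, cx, _, fy, cy, *_ = calib["camera_matrix"]["data"]  # row-major 3x3
    d = calib["distortion_coefficients"]["data"]
    return {
        "cam_model": "Pinhole",
        "cam_width": calib["image_width"], "cam_height": calib["image_height"],
        "cam_fx": fx, "cam_fy": fy, "cam_cx": cx, "cam_cy": cy,
        "cam_d0": d[0], "cam_d1": d[1], "cam_d2": d[2], "cam_d3": d[3],
    }

# Sample data: the calibration values quoted above.
calib = {
    "image_width": 1024, "image_height": 768,
    "camera_matrix": {"data": [1114.338842, 0, 488.044082,
                               0, 1483.616573, 384.580052,
                               0, 0, 1]},
    "distortion_coefficients": {"data": [0.089323, -0.327338,
                                         -0.003856, -0.002099, 0]},
}
svo = ros_calib_to_svo(calib)
```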
**Camera motion with viso2_ros**
Hello,
(Sorry for my English, it is not my native language.) I'm using ROS Indigo on Ubuntu 14.04, and my goal is to estimate robot motion via visual odometry using ROS. I'm using usb_cam and image_proc to get image_rect, which is used by mono_odometer in the viso2_ros package. I also calibrated my camera with camera_calibration. My camera is located at a height of 0.2 m, so I used static_transform_publisher between /base_link and /head_camera, where I set the z coordinate to 0.2. I need to do some tests without the robot, using only the camera, and estimate the camera motion. My tests are quite easy: I just need to move the camera forward by 1 m and get the result that the camera's location moved by 1 m, then do the same with a rotation by 90 degrees. For these easy tests I am just using `rostopic echo /mono_odometer/odometry`, which contains header, pose and twist, but my problem is that the data I'm getting doesn't look right... Can you please tell me what I am doing wrong and give me some guidance toward the right solution?
Thank you very much,
Martin