ESP32 I2C Camera
2023-11-14 | By M5Stack
License: General Public License
Tags: Camera, Arduino
* Thanks to @Enrico Casti for providing the source code and project information.
History
This project was born from a conversation on Discord, where the need was expressed for a solution to help wheelchair users with limited visibility and head mobility safely drive the wheelchair on ramps, most of the time moving backward. A previous project of mine was an intercom display (M5Stack Core3) mounted on the wheelchair, so by adding some cameras I can help address this need.
Description
As mentioned, there is an M5stack Core3 that acts as the main controller. The cameras are supposed to be mounted on the side of the wheelchair, on top of the wheels, looking backward, to offer good visibility of the path. On some occasions, a pan/tilt solution can help to cover some hidden spots.
For this project, I used two cameras I had in my boxes: one M5Camera and one Unit Camera DIY Kit. The latter is very inexpensive but doesn't offer PSRAM, so any software running on it is very limited. The DIY kit can also mount a wide-angle sensor, offering a 160-degree field of view; impressive, but I'm not sure it helps this particular need. I will think about adding this wide-angle sensor for additional purposes. I also want to explain the protocol used in this project, I2C: I'm aware it is not the best protocol for sending a lot of data, since it keeps the bus quite busy. Wi-Fi, ESP-NOW, or a UART port could offer a higher bitrate, and connecting two OV2640 modules directly to an ESP32 (or STM32) that also controls the display could have been a more efficient solution, but I had some constraints:
- Wi-Fi was out of scope: it's already required by the main project (the intercom), and it's not available outside the house or far from the router.
- UART is not possible, as I need at least two cameras and I don't have enough free pins to set up a second UART.
- I can't connect the OV2640 module directly to the dashboard board, as the M5Stack Core3 doesn't expose all of the 20 pins required. The cameras I'm using offer only a Grove port, which I can easily use for I2C and to power the camera.
Last but not least, I'm curious, and I wanted to experiment with the capability of using I2C for images.
Short video showing the functionalities
Implementation - Camera software
The software is derived directly from the standard CameraWebServer example, with the Wi-Fi part removed and some settings customized to work without PSRAM. As my display is 320x240 pixels, in order to visualize both cameras I need two 160x240 images, which I display side by side. The OV2640 module doesn't offer this resolution, and I don't want to fetch more pixels (and bytes) than required only to drop them afterward, wasting memory and CPU cycles. By using the windowing function, the OV2640 can provide the required resolution by selecting, in hardware, only a small 160x240 portion of the original image, starting from an offset I can configure (the pan and tilt parameters) to select a frame from any point in the image. This is the code:
s->set_res_raw(s, res_id, 0, 0, 0, pan, tilt, 160, 240, 160, 240, true, false);
where res_id can be:
- 1 → 400x300
- 2 → 800x600
- 3 → 1600x1200
As the frame is always 160x240, at the highest resolution I get only 1/10 of the image width, simulating a 10x zoom.
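The window arithmetic can be sketched in plain C++. This helper and its width/height tables are illustrative only, derived from the resolutions listed above; they are not part of the project code:

```cpp
#include <cassert>

// Illustrative helper: full sensor output for each res_id
// (1 -> 400x300, 2 -> 800x600, 3 -> 1600x1200), window fixed at 160x240.
struct Window {
    int maxPan;   // largest valid horizontal offset for the 160-wide window
    int maxTilt;  // largest valid vertical offset for the 240-tall window
    float zoomX;  // horizontal "zoom" relative to the 160-pixel window
};

Window windowRange(int res_id) {
    const int fullW[] = {0, 400, 800, 1600};
    const int fullH[] = {0, 300, 600, 1200};
    assert(res_id >= 1 && res_id <= 3);
    Window w;
    w.maxPan  = fullW[res_id] - 160;    // pan beyond this would leave the frame
    w.maxTilt = fullH[res_id] - 240;
    w.zoomX   = fullW[res_id] / 160.0f; // e.g. 1600 / 160 = 10x at res_id 3
    return w;
}
```

The pan and tilt values passed to set_res_raw() must stay inside these ranges, otherwise the window would extend past the sensor image.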
During boot, in the setup function, the camera is configured as an I2C slave device, with address 0x61 or 0x62 respectively, and the two callback functions are registered on the onReceive and onRequest events. onReceive is delegated to receive the commands from the master: these commands request the frame metadata (especially the number of bytes), set the resolution, or change the offset of the frame to simulate pan and tilt.
onRequest, which by definition is supposed to execute when the master requests data, just sets a flag to signal that the data has been sent.
There is often a misunderstanding of how this function works. In all the examples, when this function is called, there are some Wire.write calls to send the data to the master. The problem lies in the I2C implementation: when this function is called, the I2C device immediately sends all the data already in the buffer and does not wait for any additional data. This means the data we put inside the function will not be sent until the next iteration, which is not what we want. In my implementation, when the function is triggered, I reset a flag to inform the main loop to prepare new data for the buffer. The images we get from the module are around 6 KB, and I send them in batches of 100 bytes to avoid issues on the I2C bus, requiring around 60 iterations, depending on the size sent with the metadata. Luckily, the bus runs at 1 MHz, so the transmission is quite fast.
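To make the batching concrete, here is a hardware-free sketch of how a frame splits into 100-byte batches. There are no Wire calls, and splitFrame is an illustrative helper, not a function from the project code:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

const std::size_t BATCH = 100;  // bytes sent per master request

// Split a JPEG frame into the batches the slave would hand out,
// one batch per onRequest/loop iteration.
std::vector<std::vector<uint8_t>> splitFrame(const std::vector<uint8_t>& frame) {
    std::vector<std::vector<uint8_t>> batches;
    for (std::size_t off = 0; off < frame.size(); off += BATCH) {
        std::size_t n = std::min(BATCH, frame.size() - off);
        batches.emplace_back(frame.begin() + off, frame.begin() + off + n);
    }
    return batches;
}
```

A 6000-byte frame yields exactly 60 batches; the master learns the frame size from the metadata command, so it knows when to stop requesting.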
Configuration
It's required to configure the camera type and the I2C pins connected to the Grove port. In my example, I use CAMERA_MODEL_M5STACK_UNITCAM for the Unit Camera DIY Kit and CAMERA_MODEL_M5STACK_WIDE for the M5Camera, by uncommenting the proper #define line.
// ===================
// Select camera model
// ===================
//#define CAMERA_MODEL_WROVER_KIT // Has PSRAM
//#define CAMERA_MODEL_ESP_EYE // Has PSRAM
//#define CAMERA_MODEL_ESP32S3_EYE // Has PSRAM
//#define CAMERA_MODEL_M5STACK_PSRAM // Has PSRAM
//#define CAMERA_MODEL_M5STACK_V2_PSRAM // M5Camera version B Has PSRAM
#define CAMERA_MODEL_M5STACK_WIDE // M5Camera with PSRAM
//#define CAMERA_MODEL_M5STACK_ESP32CAM // No PSRAM
//#define CAMERA_MODEL_M5STACK_UNITCAM // No PSRAM
//#define CAMERA_MODEL_AI_THINKER // Has PSRAM
//#define CAMERA_MODEL_TTGO_T_JOURNAL // No PSRAM
//#define CAMERA_MODEL_XIAO_ESP32S3 // Has PSRAM
// ** Espressif Internal Boards **
//#define CAMERA_MODEL_ESP32_CAM_BOARD
//#define CAMERA_MODEL_ESP32S2_CAM_BOARD
//#define CAMERA_MODEL_ESP32S3_CAM_LCD
//#define CAMERA_MODEL_DFRobot_FireBeetle2_ESP32S3 // Has PSRAM
//#define CAMERA_MODEL_DFRobot_Romeo_ESP32S3 // Has PSRAM
For the I2C pins, I set them, along with the slave address and the clock (1 MHz), during the I2C initialization.
Wire.begin(address, 4, 13, 1000000);
//Wire.begin(address, 17, 16, 1000000);
When deploying the code to the two cameras, it's required to use two different addresses; the controller expects 0x61 and 0x62.
#define address 0x62
Implementation - Display and controller
This code is quite simple, requiring just about 200 lines. On every loop:
- Check for a command or a pressed button.
- If any, send the command to both cameras and wait 200 ms.
- Send a request to both cameras to get the image size of the next frame.
- For every camera, loop over the image size to fill the respective buffer.
- Draw the JPEG at the respective position, with a red line in the middle to mark the separation.
- Wait for a delay, dependent on the resolution, before getting a new frame.
M5.Lcd.drawJpg(buffer1, len_1, 0, 0);
M5.Lcd.drawJpg(buffer2, len_2, 161, 0);
M5.Lcd.drawFastVLine(160, 0, 240, 0xF800);
delay(delay_frame[res_id]);
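The "fill the respective buffer" step can be sketched hardware-free like this. fillBuffer and readChunk are hypothetical names: the callback stands in for the Wire.requestFrom()/Wire.read() sequence a real master would use, so the loop logic can be shown on its own:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <functional>

// Stand-in for one I2C request: copies up to `n` bytes into `dst`
// and returns how many bytes were actually received.
using ReadChunk = std::function<std::size_t(uint8_t* dst, std::size_t n)>;

// Fill a camera's buffer by issuing repeated <=100-byte requests
// until the frame size advertised in the metadata is reached.
std::size_t fillBuffer(uint8_t* buf, std::size_t frameLen, ReadChunk readChunk) {
    const std::size_t BATCH = 100;
    std::size_t got = 0;
    while (got < frameLen) {
        std::size_t want = std::min(BATCH, frameLen - got);
        std::size_t n = readChunk(buf + got, want);
        if (n == 0) break;  // bus error: give up on this frame
        got += n;
    }
    return got;             // bytes received; compare against frameLen
}
```

With a 6000-byte frame this issues 60 requests, matching the roughly 60 iterations mentioned for the camera side.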
On the I2C initialization, I set the speed to 1 MHz, which is the maximum speed we can configure on the ESP32; any higher setting will be lowered to this maximum by the Espressif libraries.
Wire.setClock(1000000);
Wire.begin();
Final notes
I didn't really test this solution on a wheelchair to find the best sensor for the purpose, whether wide-angle or narrow. The DIY camera offers both lenses in the box, so it's easy to test. I didn't mention any power source for the M5Stack Core, as most electric wheelchairs offer USB ports. Eventually, the controller could be replaced with a custom board with a bigger screen, like a 4" 480x320 display. That display would allow larger frames, or leave some space for touch controls, also considering the other functions offered by the original project's intercom feature.
Code
https://github.com/ecasti-gmail-com/i2c_esp32_camera/tree/master
Have questions or comments? Continue the conversation on TechForum, DigiKey's online community and technical resource.