Across the globe, a remarkable community of 2.5 million beekeepers engages in the annual springtime ritual of locating the queen bee within their beehives. Given that a single hive can house anywhere from 10,000 to 60,000 bees, and there is only one queen among them, this endeavor can be a laborious one.
Consider an average beekeeper tending approximately 20 beehives. The hunt for a queen in a single hive can easily consume around three hours, so the diligent beekeeper may find themselves investing some 60 hours in this single pursuit each spring. And let's not forget the beekeepers who nurture a significantly larger collection of hives; for them, this quest is an even greater investment of time.
This is where our project, Hive Vision, comes into play. Initially, we were uncertain whether our solution was feasible, as no established method existed for pinpointing the location of a queen within a hive. Our idea was to harness technology, specifically cameras and machine learning.
Our approach involves deploying a camera onto each honeycomb. These devices, powered by a machine learning model, are trained to detect the presence of the queen. The queen, distinguishable by her unique mark, a miniature colored plate, can be identified amid the sea of bees.
Machine Learning

The honeycomb was scanned using a camera combined with an Arduino Nano 33 BLE Sense. To enable Hive Vision, a machine learning model was developed on the Edge Impulse platform and then exported as Arduino-compatible code.
The initial phase involved capturing images featuring a marked queen bee and the surrounding background, which included other bees. The acquired images were then uploaded and annotated as follows:
- Dot: Representing the queen bee
- Background: Encompassing other bees and the honeycomb itself
Next, an impulse with two output classes, "dot" and "background", was created and the model's training commenced. The model achieved a modest accuracy of 50% on the Edge Impulse platform, which is not an optimal outcome. Inadequate lighting conditions during training image capture may have contributed to this; the issue could be addressed by ensuring better illumination when acquiring the images. The following section provides an overview of the results produced by the trained model.
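With two output classes, the final decision step amounts to comparing two confidence scores. The sketch below is a minimal illustration of that step in plain C++; the function name and the 0.6 confidence threshold are our own assumptions for illustration, not the Edge Impulse SDK's actual API or a tuned value.

```cpp
#include <cassert>

// Hypothetical decision step: the classifier yields one confidence
// score per class ("dot" = queen's mark, "background" = other bees
// and comb). We flag a queen only when the "dot" score both beats
// "background" and clears an absolute threshold, to reduce false
// positives. The 0.6 threshold is an illustrative assumption.
bool queenDetected(float dotScore, float backgroundScore,
                   float threshold = 0.6f) {
    return dotScore >= threshold && dotScore > backgroundScore;
}
```

In practice the threshold would be tuned against the model's validation set, trading missed queens against false alarms.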
The trained model was exported as an Arduino library and installed into the Arduino IDE's libraries folder for seamless integration.
Arduino firmware

To build the firmware, we used the Arduino library generated by the Edge Impulse platform. The Arduino Nano 33 BLE Sense, equipped with a built-in Bluetooth module, served as our hardware platform. To establish a Bluetooth connection, we incorporated code that leverages the ArduinoBLE library. Additionally, we integrated red and green LED indicators on the Arduino board: the green LED signifies the detection of a queen bee, while the red LED indicates the absence of a detected queen.
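The LED handling reduces to a small piece of state logic. Here it is sketched as plain C++, with the Arduino calls shown only as comments; the pin constants in those comments are our own illustrative assumptions, not values from the project.

```cpp
// Illustrative LED indicator logic for the firmware. On the real
// board this would run inside loop(), driving two GPIO pins with
// digitalWrite(); the pin names below are assumptions.
struct LedState {
    bool green;  // lit when a queen has been detected
    bool red;    // lit when no queen was detected
};

// Map one inference result to the LED indicators. Exactly one LED
// is on at any time, so the beekeeper gets an unambiguous signal.
LedState indicate(bool queenDetected) {
    // In Arduino code this would be, e.g.:
    //   digitalWrite(GREEN_PIN, queenDetected ? HIGH : LOW);
    //   digitalWrite(RED_PIN,   queenDetected ? LOW  : HIGH);
    return LedState{queenDetected, !queenDetected};
}
```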
Android App

The LED indicators described in the Arduino firmware section are located on the Arduino Nano board itself, which is inconvenient to check while working around a hive. We therefore developed our own phone application that receives the detection result over the Bluetooth connection.
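The data the app needs is tiny: one status value per inference. Below is a minimal sketch of such a payload as a single-byte BLE characteristic; this encoding is our own assumption for illustration, as the project does not document its actual characteristic layout.

```cpp
#include <cstdint>

// Hypothetical one-byte status payload for the BLE characteristic
// the phone app subscribes to. The encoding (1 = queen detected,
// 0 = not detected) is an assumption for illustration.
uint8_t encodeStatus(bool queenDetected) {
    return queenDetected ? 1 : 0;
}

// App-side decode: turn the notified byte back into a boolean.
bool decodeStatus(uint8_t payload) {
    return payload != 0;
}
```

Keeping the payload to one byte keeps BLE notifications cheap and makes the app-side parsing trivial.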
Improvements

As previously mentioned in the machine learning section, improving the lighting conditions is crucial to better image capture of the honeycomb. To achieve this, a housing for the device with an integrated lighting system would need to be designed and constructed, placing a suitable light source within the housing to ensure consistent illumination during the picture-taking process.
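One way to catch the lighting problem at capture time would be a simple exposure check before a frame is passed to the model. This is a sketch of our own, not part of the project's firmware, and the grayscale threshold of 60/255 is an arbitrary illustrative value.

```cpp
#include <cstddef>
#include <cstdint>

// Mean brightness of an 8-bit grayscale frame buffer. Frames darker
// than the threshold could be discarded, or could trigger the
// housing's light, instead of being fed to the model. The threshold
// of 60 is an illustrative assumption, not a calibrated value.
bool frameBrightEnough(const uint8_t* pixels, size_t count,
                       uint8_t threshold = 60) {
    if (count == 0) return false;
    uint64_t sum = 0;
    for (size_t i = 0; i < count; ++i) sum += pixels[i];
    return (sum / count) >= threshold;
}
```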