I am a software engineer with a background in computer vision, backed by a master's degree and practical machine learning research.
I now work as a camera software engineer, writing kernel-level code for camera chipsets and managing parts of the image processing pipeline. A key area I handle is clock voting for chipset drivers in a synchronous environment.
Made with:
I developed a computer vision model designed specifically for high-altitude drone imagery, trained on Google Colab. The dataset was obtained in collaboration with Stanford University, with all imagery captured on their campus. This work resulted in a publication, one of only a few in this specialized area at the time. Technically, the model is built on the RetinaNet architecture and uses a feature pyramid network to efficiently pinpoint and extract relevant features from the drone-captured data. Paired with my drone simulation software, which accurately maps to real-world coordinates, this combination holds promise for applications such as crowd monitoring, search and rescue operations, and delivery services.
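As a rough illustration of mapping detections to real-world coordinates, here is a minimal sketch using a pinhole-camera model, assuming a nadir-pointing camera at a known altitude (the function name and parameters are illustrative, not the project's actual code):

```python
# Hypothetical sketch: projecting a detection's pixel offset to a
# ground-plane offset with a pinhole-camera model. Assumes a
# nadir-pointing camera at known altitude; names are illustrative.
def pixel_to_ground(px, py, cx, cy, focal_px, altitude_m):
    """Map a pixel (px, py) to a ground-plane offset in meters,
    relative to the point directly below the drone."""
    dx = (px - cx) / focal_px * altitude_m  # lateral offset
    dy = (py - cy) / focal_px * altitude_m  # longitudinal offset (sign is a convention)
    return dx, dy

# A detection 100 px right of the principal point, seen from 50 m up
# with a 1000 px focal length, lies about 5 m to the side of the drone.
offset = pixel_to_ground(1060, 540, 960, 540, 1000.0, 50.0)
```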
Made with:
This is an Android-based search-and-rescue drone simulation application. A publish-subscribe model is used to communicate with a server over ROSbridge, obtaining information such as coordinates, battery life, and objectives. Using the Google Maps API, this information is displayed on an XML-based UI that lets users control the drones in a realistic simulation. To allow for simultaneity, the drones are controlled and displayed concurrently using multi-threading.
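The publish-subscribe exchange over ROSbridge boils down to JSON messages with an `op` field. A minimal sketch of the two message shapes (topic names here are illustrative assumptions, not the app's actual topics):

```python
import json

# Minimal sketch of rosbridge v2 protocol messages; the "subscribe" and
# "publish" ops are part of the protocol, but these topic names are
# illustrative assumptions.
def subscribe(topic, msg_type):
    return json.dumps({"op": "subscribe", "topic": topic, "type": msg_type})

def publish(topic, msg):
    return json.dumps({"op": "publish", "topic": topic, "msg": msg})

# Subscribe to a drone's GPS fix and publish a new objective.
sub = subscribe("/drone1/gps", "sensor_msgs/NavSatFix")
pub = publish("/drone1/objective", {"latitude": 37.43, "longitude": -122.17})
```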
Made with:
Web-based app that uses Servlets (JSF framework) to authenticate users and then retrieve their Facebook images via Facebook's API. These images are analyzed with Google's Cloud Vision API, and the returned data is stored in a Firestore database to avoid repeated API calls and optimize running times. The TinEye reverse-image-search API then queries postings relevant to the user's Facebook image based on the Cloud Vision analysis. The web app is developed and hosted on Google App Engine, with another version hosted on Firebase. I also set up Google Analytics for the hosted website.
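The Firestore step is essentially a cache-aside pattern: check the datastore before calling the vision API. A small sketch of that pattern, with a plain dict standing in for Firestore and a hypothetical stub in place of the real Cloud Vision call:

```python
# Sketch of the cache-aside pattern: only call the vision API on a
# cache miss. The dict stands in for Firestore, and analyze_image is
# a hypothetical stub, not the real Cloud Vision client.
cache = {}       # image ID -> cached analysis result
api_calls = 0

def analyze_image(image_id):
    global api_calls
    api_calls += 1
    return {"labels": ["example"]}   # placeholder response

def get_analysis(image_id):
    if image_id not in cache:        # cache miss: hit the API once
        cache[image_id] = analyze_image(image_id)
    return cache[image_id]

get_analysis("fb_123")
get_analysis("fb_123")               # second call is served from the cache
```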
Made with:
A Ruby on Rails application that implements a photo-sharing site. The app uses the MVC design pattern for the user interface. User comments are stored in a SQLite database, while images are stored in an Amazon S3 bucket. The app is packaged in a Docker container, which runs on an Amazon AWS EC2 instance.
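For context, containerizing a Rails app like this typically comes down to a short Dockerfile; this is a hedged sketch, with the Ruby version and port as illustrative assumptions rather than the project's actual configuration:

```dockerfile
# Hypothetical Dockerfile sketch for a Rails app; version and port
# are illustrative assumptions.
FROM ruby:2.7
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
EXPOSE 3000
CMD ["rails", "server", "-b", "0.0.0.0"]
```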
Made with:
For this game, the Unity engine was used to develop a 2D fighter. The game features two characters, Hillary and Trump, whose skeletons, hit boxes, and AI were coded from scratch. C# was used for almost all scripting, including the various combat movements as well as sound feedback and winner celebrations.
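The core of a from-scratch hit-box system is an axis-aligned overlap test between an attack box and a hurt box. A minimal sketch of that check (in Python for illustration; the game itself is C#):

```python
# Minimal axis-aligned bounding-box overlap test, the core primitive
# of a from-scratch hit-box system; boxes are (x, y, width, height).
def hitboxes_overlap(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    # Overlap requires each box to start before the other one ends
    # on both axes.
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

hit = hitboxes_overlap((0, 0, 2, 2), (1, 1, 2, 2))    # overlapping boxes
miss = hitboxes_overlap((0, 0, 2, 2), (5, 5, 1, 1))   # disjoint boxes
```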
Made with:
A database-intensive application that manages retailers' inventory, with distinct functionality for admin and member users. MySQL is used to manage the data, and a user-friendly UI lets users perform a variety of actions on the data through a self-built query builder. The software also generates and manipulates CSV reports covering cost, quantity, retail price, etc.
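A self-built query builder usually assembles a parameterized statement so user input never gets interpolated into the SQL. A small sketch of that idea (table and column names are illustrative assumptions; `%s` is the placeholder style used by common MySQL drivers):

```python
# Illustrative sketch of a tiny parameterized query builder; the
# table/column names are assumptions, not the project's schema.
def build_select(table, columns, filters):
    """Return a SQL string with %s placeholders plus the values,
    keeping user input out of the query text itself."""
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    if filters:
        where = " AND ".join(f"{col} = %s" for col in filters)
        sql += f" WHERE {where}"
    return sql, list(filters.values())

sql, params = build_select("inventory", ["name", "quantity"],
                           {"retail_price": 9.99})
```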
Made with:
This project extends the work of this research paper. Along with a team of three, I worked on a PyTorch implementation of the paper's model architecture for a computer vision project. The code base's lack of documentation caused a lot of trouble and made refactoring difficult, but this turned into an immersive learning experience with PyTorch and the machine learning models used in this architecture. The model was trained on the Agriculture-Vision dataset and combines a Multi-view Self-Constructing Graph Convolutional Network with an Adaptive Class Weighting Loss, reaching 68% detection accuracy in the agriculture domain. Due to high RAM usage, the model had to be trained on a team member's gaming PC rather than Google Colaboratory.
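The paper's Adaptive Class Weighting Loss is more involved than this, but its core idea of up-weighting rare classes can be sketched as an inverse-frequency-weighted cross-entropy; this is a simplified NumPy illustration, not the paper's exact formulation:

```python
import numpy as np

# Simplified sketch of class-frequency-weighted cross-entropy: rare
# classes get larger weights, so the loss is not dominated by the
# most common class. Not the paper's exact ACW formulation.
def weighted_ce(probs, labels, n_classes):
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    weights = counts.sum() / (n_classes * np.maximum(counts, 1))  # inverse frequency
    per_sample = -np.log(probs[np.arange(len(labels)), labels] + 1e-12)
    return np.mean(weights[labels] * per_sample)

# Class 1 is rarer here, so its samples contribute more to the loss.
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.3, 0.7]])
labels = np.array([0, 0, 1])
loss = weighted_ce(probs, labels, 2)
```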
Made with:
This is an Android app that uses the PyTorch Mobile framework to run the model obtained from the Agricultural Vision back end. The app performs segmentation and detection of the six mentioned features, using the model to obtain results for the displayed images. This app was a way to get introduced to PyTorch Mobile and become familiar with mobile development for machine learning frameworks.
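Running a model under PyTorch Mobile requires exporting it to TorchScript first. A sketch of that export step, with a tiny model standing in for the actual segmentation network:

```python
import torch

# Sketch of the TorchScript export step a PyTorch Mobile app needs.
# The tiny model here is a stand-in for the real segmentation network.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 6, kernel_size=1)).eval()
example = torch.rand(1, 3, 8, 8)

traced = torch.jit.trace(model, example)   # record the forward pass
traced.save("segmentation_model.pt")       # ship this file in the app's assets
out = traced(example)                      # same result as the eager model
```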
Made with:
This is an Android app that uses the OpenCV library, along with other supporting libraries, to implement a Canny edge detector. The app offers the ability to enable or disable the detector, as well as a threshold slider. Edge detection runs live on each frame of the camera feed.
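To show what the threshold slider controls, here is a heavily simplified NumPy sketch of the thresholded-gradient core of a Canny-style detector (omitting the smoothing, non-maximum suppression, and hysteresis steps of the full algorithm):

```python
import numpy as np

# Simplified sketch: gradient magnitude plus a single threshold.
# Full Canny adds Gaussian smoothing, non-maximum suppression, and
# hysteresis with two thresholds.
def edges(img, threshold):
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # central-difference gradients
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold             # boolean edge map

# A vertical black/white boundary yields edges along the middle columns.
img = np.zeros((8, 8))
img[:, 4:] = 255.0
edge_map = edges(img, 100)
```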
Made with:
This is a Jupyter notebook in which I implemented a clustering algorithm and a dimensionality-reduction approach from scratch. These machine learning algorithms were applied to the digits dataset, and I was able to classify the digits with high accuracy.
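In the spirit of that from-scratch work, here is a hand-rolled k-means sketch (the notebook's exact algorithm may differ); it alternates between assigning points to their nearest center and moving each center to the mean of its points:

```python
import numpy as np

# Hand-rolled k-means sketch; the notebook's actual algorithm may differ.
def kmeans(X, k, iters=20):
    centers = X[:: len(X) // k][:k].copy()   # simple deterministic init
    for _ in range(iters):
        # Assign each point to its nearest center ...
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # ... then move each center to the mean of its assigned points.
        for i in range(k):
            if (labels == i).any():
                centers[i] = X[labels == i].mean(0)
    return labels, centers

# Two well-separated blobs are recovered as two clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (10, 2)), rng.normal(5, 0.1, (10, 2))])
labels, _ = kmeans(X, 2)
```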
Made with:
This Jupyter notebook works with the Iris dataset, classifying the different flower species using Bayesian decision theory, parametric methods, and a multivariate classifier. Python libraries such as NumPy and scikit-learn were used.
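The parametric approach can be sketched as fitting a Gaussian per class and classifying by maximum log-likelihood; this toy version assumes equal priors and per-feature (diagonal) variances for brevity, which the notebook's multivariate classifier need not:

```python
import numpy as np

# Toy parametric classifier: one Gaussian per class (diagonal
# covariance, equal priors assumed for brevity), classify by the
# larger log-likelihood. A simplified sketch, not the notebook's code.
def fit(X, y):
    classes = np.unique(y)
    return {c: (X[y == c].mean(0), X[y == c].var(0) + 1e-9) for c in classes}

def predict(params, x):
    def log_lik(mean, var):
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)
    return max(params, key=lambda c: log_lik(*params[c]))

# Tiny 1-D example with two well-separated classes.
X = np.array([[0.1], [0.2], [0.9], [1.0]])
y = np.array([0, 0, 1, 1])
params = fit(X, y)
```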