I am a software engineer with 4 years of experience designing, implementing, and maintaining resilient, scalable, and reliable data architectures and platforms, and I hold two master's degrees, one focused on big data and the other on AI. My CV highlights my achievements in previous roles, and I believe I would be a strong match for this position.
Cheers.
As Chief Technology Officer, I am responsible for the entire tech stack of a music distribution platform, managing 6 squads: front-end, UX/UI, back-end (including the data squad), testing, blockchain, and DevOps. I created a fast, scalable, reliable, and secure architecture for all of the platform's processes, using an event-driven architecture with microservices and Kafka and exposing REST APIs to the front-end, with everything hosted on AWS.
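As a minimal sketch of that event-driven pattern: a domain event is serialized and published to a Kafka topic, from which downstream microservices consume. The event name, fields, and topic here are illustrative, not the platform's actual schema, and the broker wiring (e.g. a `confluent_kafka.Producer`) is assumed rather than shown:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical domain event for a music distribution platform;
# the field names are illustrative only.
@dataclass
class TrackUploaded:
    track_id: str
    artist_id: str
    uploaded_at: str

def serialize_event(event: TrackUploaded) -> bytes:
    """Serialize a domain event to JSON bytes for a Kafka topic."""
    return json.dumps(asdict(event)).encode("utf-8")

def publish(producer, topic: str, event: TrackUploaded) -> None:
    """Publish via any producer exposing produce(topic, value),
    e.g. confluent_kafka.Producer; broker setup is assumed."""
    producer.produce(topic, value=serialize_event(event))

event = TrackUploaded("trk-1", "art-9",
                      datetime.now(timezone.utc).isoformat())
payload = serialize_event(event)
```

Keeping events as plain serialized payloads like this is what lets each squad's microservice consume the same topic independently.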
As tech lead, I was responsible for deciding on and designing the data architecture to process large data files. I chose to use parallelism, validating, cleaning, and transforming the data with pandas, then loading the processed data into a PostgreSQL database, supporting scalability, high performance, and zero downtime.
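The validate/clean/transform step described above can be sketched as chunked pandas work fanned out across workers. The column names and cleaning rules here are illustrative assumptions, not the real pipeline's schema:

```python
import pandas as pd
from concurrent.futures import ThreadPoolExecutor

def clean_chunk(chunk: pd.DataFrame) -> pd.DataFrame:
    """Validate, clean, and transform one chunk: drop rows missing
    an id, normalize text, and coerce amounts to numeric."""
    chunk = chunk.dropna(subset=["id"]).copy()
    chunk["name"] = chunk["name"].str.strip().str.lower()
    chunk["amount"] = pd.to_numeric(chunk["amount"],
                                    errors="coerce").fillna(0)
    return chunk

def transform_parallel(df: pd.DataFrame, n_chunks: int = 4) -> pd.DataFrame:
    """Split the frame into interleaved chunks and clean them in
    parallel, then reassemble in the original row order."""
    chunks = [df.iloc[i::n_chunks] for i in range(n_chunks)]
    with ThreadPoolExecutor() as pool:
        cleaned = list(pool.map(clean_chunk, chunks))
    return pd.concat(cleaned).sort_index()

raw = pd.DataFrame({
    "id": [1, 2, None, 4],
    "name": ["  Foo ", "BAR", "baz", "Qux"],
    "amount": ["10", "x", "3.5", "7"],
})
clean = transform_parallel(raw)
```

The final load into PostgreSQL would then be a separate step, for example `clean.to_sql(table, engine)` with a SQLAlchemy engine (connection details omitted here).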
As Data Engineer Lead, I was responsible for designing, implementing, and maintaining a resilient, reliable, and scalable big-data environment for this last-mile delivery company, providing solutions for more than 36 companies, including Oxxo, Walmart, Exito, and others. The company offered 5 different services, each of which a client company could opt into. Each service produced a huge amount of data that had to be consumed, normalized, cleaned, and loaded into one data mart per company, plus a central data lake (BigQuery). The stack was Python, pandas, concurrency, and an event-driven architecture with microservices and Docker, running on clusters, services, and tasks hosted in AWS, connected to a RabbitMQ instance, and managed with Terraform. On average, this ETL could consume, process, and load around 9,000,000 records every 3 minutes. I also used Airflow for simpler tasks and specific report extraction. As a BI tool, I selected a customized version of the open-source Redash.
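The consume-and-normalize step of such a pipeline can be sketched with the standard library alone. The record fields and batch size below are illustrative assumptions; in the real pipeline, messages would arrive from RabbitMQ (e.g. via a `pika` consumer callback) rather than an in-memory queue:

```python
import json
from queue import Queue, Empty

def normalize_record(raw: bytes) -> dict:
    """Parse one service message and normalize field names and
    types before loading into the per-company data mart
    (fields are illustrative, not the real schema)."""
    rec = json.loads(raw)
    return {
        "company": rec["company"].strip().upper(),
        "service": rec.get("service", "unknown"),
        "value": float(rec.get("value", 0)),
    }

def drain_batch(q: Queue, batch_size: int = 1000) -> list:
    """Drain up to batch_size pending messages, normalizing each;
    in production this loop would sit behind the RabbitMQ consumer."""
    batch = []
    while len(batch) < batch_size:
        try:
            raw = q.get_nowait()
        except Empty:
            break
        batch.append(normalize_record(raw))
    return batch

q = Queue()
for msg in [b'{"company": " oxxo ", "service": "tracking", "value": "2"}',
            b'{"company": "walmart", "value": 5}']:
    q.put(msg)
batch = drain_batch(q)
```

Batching like this, with one normalized batch written per load cycle, is what keeps per-company data marts and the central data lake consistent under high message volume.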