Carlos Martinez

Mentor
Rising Codementor
US$0.00
For every 15 mins
ABOUT ME

I worked as a Software Engineer and as a Data Engineer on a Data Platform team at a company that stored roughly 10% of all electronic invoices in Brazil. That amounted to billions of records in our databases, with tens of millions more arriving every day. My team was responsible for providing APIs for other engineering teams, both to ingest new records into our platform and to retrieve records based on various filters. These APIs enabled the engineering teams to develop new features for end-users without worrying about business logic, which database the data was stored in, data migrations, data consistency, etc.

As a software engineer, I've developed event-driven microservices for real-time data processing and HTTP APIs. My day to day tasks included:

  • Running discoveries to find out what the problem is and how to solve it
  • Designing the solution
  • Creating the tasks in our backlog
  • Building the systems with unit and integration tests
  • Setting up the CI/CD pipeline
  • Creating alerts and monitoring the system after it is deployed to production

As a data engineer, I've built batch pipelines to migrate billions of records from one database to another in just a few hours, and I also maintained several Airflow DAGs that triggered ETL pipelines populating data marts in our data warehouse. I was also the Data Engineering Chapter Leader, responsible for tutoring junior data engineers, creating data engineering trainings, spreading good practices and standards for building pipelines, and organizing meetings to discuss technology and tools within the data engineering scope.

More recently, I was being trained to become a Tech Lead. I took over my Tech Lead's responsibilities when he went on vacation for 20 days, which included talking to our manager to decide which projects to prioritize, helping the team plan the weekly tasks, resolving major incidents, and planning the next quarter's roadmap.

Brasilia (-03:00)
Joined November 2023
EXPERTISE

REVIEWS FROM CLIENTS

Carlos's profile has been carefully vetted and approved as a Codementor. Connect with Carlos now, and leave a review for them once you're done!
SOCIAL PRESENCE
GitHub
go-data-platform
[WIP] A data platform written in Golang
Go
database-access-with-docker
Golang's 'Accessing a relational database' tutorial using Docker
Go
EMPLOYMENTS
CEO
Post Favorito
2023-03-01 – 2023-10-01

• Responsible for a team of three people and several contractors; the company provides a platform with editable and ready-to-use content for psychologists to post on their Instagram. Our product return rate is less than 2%.
• Managed two projects that replaced several products with a single one that was simpler and more intuitive to use, and that increased the company's return on investment by 50%.
• Created and optimized all social media strategy campaigns on Facebook Ads.
• Wrote the copy for all product launches, including e-mails, WhatsApp/Telegram messages, and social media posts. Sold 56% more than expected on the first day and 30% more in the first week.

Project management
Facebook Advertising
Financial Analyst
Data Engineer/Software Engineer
Arquivei
2021-01-01 – 2023-03-01

• Developed distributed event-driven microservices for real-time processing of large volumes of data; these workers backed the most critical services of the company and each handled 10 million records per day. We believed in the motto “you build it, you run it, you own it”: all systems were monitored and had dashboards, alerts, and a CI/CD pipeline.
• Built batch pipelines to migrate data between databases with billions of records within 5 days, including data extraction from different sources, data transformation, data writes, data audits, and getting the new system up and running.
• Developed HTTP APIs to retrieve data from internal databases and serve it to stream-aligned teams. The mission of my team, the Electronic Invoice Platform, was to enable other engineering teams to develop new features for the end user without needing to worry about the business logic of the different types of documents or the technology needed to store or retrieve them.
• Devised and developed an internal library in Go, used in the development of our workers to read and write from Bigtable, encode and decode data in Wire Format/Avro, generate Change Data Capture, and send events to Kafka. This library reduced the time to develop, test, and deploy a new system by 75%.
• Provided maintenance for batch and streaming pipelines, modeled data from different storages and data marts, and orchestrated hourly, daily, and weekly DAGs for ETL pipelines.
• Served as Data Engineering Chapter Leader, a position that impacts all data engineers, analytics engineers, BI engineers, and some software engineers in the company. My responsibilities included tutoring junior data engineers, creating data engineering trainings, spreading good practices and standards for batch pipelines, and organizing recurring meetings to discuss technology and tools within the Data Engineering scope.

Scala
PostgreSQL
Google BigQuery
Docker
ETL
Apache Kafka
Kubernetes
Google cloud storage
Grafana
Bigtable
Go (Golang)
Apache Beam
Apache Airflow
Google Dataflow
Software Engineer
CSDBR
2020-01-01 – 2020-12-01

• Developed microservices responsible for different operations, such as recording transactions, pricing, and generating reports, that reliably processed millions of transactions per day using distributed computing.
• Acted as Team Leader during a six-month project to create a new product consisting of several microservices. Dealt with stakeholders and analysts to define the project scope and deadlines, created and prioritized backlog tasks, conducted team dailies, and developed the new system alongside other team members.
• Worked on replacing a Python API with a Clojure API that handled five times more requests per second.
• Led the technical onboarding of new employees, introducing the system architecture, functional programming, test-driven development, and event sourcing.

Python
Cassandra
Docker
Clojure
Apache Kafka
PROJECTS
Go Data Platform
2023
This data platform has two applications. Ingestor – a worker that consumes inputs from a Kafka topic, decodes them, and reads previous entries from the database. If there is no previous data, it persists the record in the database; if there is previous data with a version greater than the input's, it discards the input; if there is previous data with a version less than the input's, it updates the record in the database. Retriever – an HTTP API that receives a GET request, applies the filter from the query parameters, accesses the database, and returns the data as JSON.
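As a rough illustration of the Ingestor's versioning rule, here is a minimal Go sketch; the Record and Store types and the in-memory store are illustrative assumptions, not the repository's actual API.

package main

import (
    "context"
    "errors"
    "fmt"
)

// Record is an assumed shape for an ingested message (hypothetical).
type Record struct {
    ID      string
    Version int64
    Payload []byte
}

// Store abstracts the database used by the Ingestor (hypothetical interface).
type Store interface {
    Get(ctx context.Context, id string) (*Record, error)
    Put(ctx context.Context, rec *Record) error
}

var ErrNotFound = errors.New("record not found")

// Ingest applies the versioning rules described above: persist when there is
// no previous entry, update when the input is newer, discard when it is not.
func Ingest(ctx context.Context, db Store, in *Record) error {
    prev, err := db.Get(ctx, in.ID)
    switch {
    case errors.Is(err, ErrNotFound):
        return db.Put(ctx, in) // first time this record is seen
    case err != nil:
        return err
    case prev.Version >= in.Version:
        return nil // stale input, discard (equal versions treated as stale here)
    default:
        return db.Put(ctx, in) // newer version, overwrite
    }
}

// memStore is a tiny in-memory Store used only to exercise the sketch.
type memStore struct{ data map[string]*Record }

func (m *memStore) Get(ctx context.Context, id string) (*Record, error) {
    if r, ok := m.data[id]; ok {
        return r, nil
    }
    return nil, ErrNotFound
}

func (m *memStore) Put(ctx context.Context, rec *Record) error {
    m.data[rec.ID] = rec
    return nil
}

func main() {
    ctx := context.Background()
    db := &memStore{data: map[string]*Record{}}
    _ = Ingest(ctx, db, &Record{ID: "doc-1", Version: 2}) // persisted
    _ = Ingest(ctx, db, &Record{ID: "doc-1", Version: 1}) // discarded as stale
    fmt.Println(db.data["doc-1"].Version)                 // prints 2
}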
MySQL
Go (Golang)
Kafka
Stateful processor
2022
I've worked on a Data Platform team at a company that stored roughly 10% of all electronic invoices in Brazil (billions of documents), receiving hundreds of millions of new ones every month. My team was responsible for providing different APIs for other engineering teams, both to ingest new documents into our platform and to retrieve documents based on various filters. This enabled the engineering teams to develop new features for end-users without worrying about business logic, which database the data was stored in, data migrations, etc.

The core system of our data platform was called the stateful processor. It was a microservice that read a valid input from Kafka, retrieved the message schema from the schema registry, decoded the Avro input, and checked the database for a previous entry for the same document. If there was a previous entry but the incoming data was older than what was already persisted, the input was discarded and a Kafka event was created. If there was no previous entry, or if the incoming data was newer, a new entry was made in the database (Google Bigtable), a Kafka event was created, and the Change Data Capture (CDC) for the database was also created and published to Kafka. The CDC made it possible for other services to replicate the change in other databases, such as Google BigQuery and ElasticSearch, which served different APIs.

After implementing the same kind of stateful processor for two different types of electronic invoices, I developed a mental model of what was replicable from one system to another and what wasn't. With this model, my team and I created a library in Go that handled the routine processes shared by all processors (read, decode, persist, and write). The business logic that differed from one document to another was simply declared as a new service. This new library reduced the implementation, testing, and deployment time of new microservices from four months to one month.
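A minimal Go sketch of that split follows; the Pipeline and BusinessLogic names are illustrative assumptions, not the internal library's real API. The shared code owns the consume/decode/persist/CDC plumbing, and each document type only declares its own business logic.

package processor

import "context"

// Document is the decoded input handed to the business logic (hypothetical shape).
type Document struct {
    Key     string
    Version int64
    Fields  map[string]any
}

// BusinessLogic is the only part a new stateful processor has to declare:
// how to turn a decoded input into the entry to persist, or discard it.
type BusinessLogic interface {
    Transform(ctx context.Context, in Document) (out Document, keep bool, err error)
}

// Pipeline bundles the routine steps shared by every processor.
type Pipeline struct {
    Consume func(ctx context.Context) (Document, error)  // read from Kafka and decode Avro
    Persist func(ctx context.Context, doc Document) error // write the entry to the database
    EmitCDC func(ctx context.Context, doc Document) error // publish the change event to Kafka
    Logic   BusinessLogic
}

// RunOnce executes a single consume -> transform -> persist -> CDC cycle.
func (p *Pipeline) RunOnce(ctx context.Context) error {
    in, err := p.Consume(ctx)
    if err != nil {
        return err
    }
    out, keep, err := p.Logic.Transform(ctx, in)
    if err != nil || !keep {
        return err // discarded inputs simply end the cycle
    }
    if err := p.Persist(ctx, out); err != nil {
        return err
    }
    return p.EmitCDC(ctx, out)
}

Under a split like this, adding a new document type means supplying only a BusinessLogic implementation, which is roughly what cut the implementation time from four months to one.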
Google Cloud Platform
Go (Golang)
Kafka