Our Technology Stack

At Launchist, we believe in using the right tool for the job, not just following trends or sticking to what’s familiar. We make sure all our developers have full-stack experience, which avoids over-specialization and keeps a holistic view of system architecture. This approach lets us choose the most appropriate solutions for each project’s unique needs. Here’s an overview of the key technologies we work with, along with examples of when we find them most appropriate:

Backend Development

  • Python: Our preferred language for building first prototypes (FastAPI) that can go into production to test the waters. It’s ideal for rapid development, data analysis, and AI/ML integration.
  • Go: Perfect for building efficient, concurrent network services and microservices. We reach for Go when performance and low resource usage are critical.
  • Node.js: While we’re transitioning away from Node.js, we have extensive experience with NestJS, particularly for projects requiring TypeScript and GraphQL integration. We continue to support existing Node.js projects.
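
The concurrency that drives our choice of Go usually boils down to the fan-out/fan-in pattern: issue many network calls at once, then collect the results. A self-contained sketch of that pattern, using Python’s stdlib asyncio for illustration (in Go this would be goroutines and channels; the host names and `fetch` function are made up):

```python
import asyncio

async def fetch(host: str) -> str:
    # Stand-in for a real network call (host names are hypothetical).
    await asyncio.sleep(0.01)
    return f"response from {host}"

async def fan_out(hosts: list[str]) -> list[str]:
    # Issue all requests concurrently; gather preserves input order.
    return await asyncio.gather(*(fetch(h) for h in hosts))

results = asyncio.run(fan_out(["db", "cache", "auth"]))
```

The calls overlap in time rather than running back-to-back, which is exactly the property that makes a service efficient under I/O-bound load.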

Databases

  • PostgreSQL: Our go-to for complex, transaction-heavy applications that require robust data integrity. We also leverage its vector storage capabilities (via the pgvector extension) for AI-related projects.
  • MongoDB: We use this when flexible, schema-less data storage is needed, particularly for applications with evolving data models. Its vector storage features are also valuable for certain AI applications.
  • Pinecone: Our choice for vector database needs, particularly useful in AI and machine learning projects requiring efficient similarity search.
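
Whether the store is pgvector or Pinecone, the core operation of a vector database is the same: rank stored embeddings by similarity to a query vector. A toy pure-Python illustration of cosine-similarity ranking (the three-dimensional "embeddings" are made up; a real system would use a client library and far higher dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Dot product of the vectors divided by the product of their norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "index" of document embeddings (invented values).
index = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.0, 1.0, 0.2],
    "doc_c": [0.8, 0.2, 0.1],
}

query = [1.0, 0.0, 0.0]
ranked = sorted(index, key=lambda d: cosine_similarity(index[d], query), reverse=True)
```

Here `ranked` lists the documents most similar to the query first, which is the primitive that similarity search in AI projects is built on.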

Cloud & DevOps

  • AWS and Google Cloud Platform: We’re proficient in both, choosing based on specific project needs, existing client infrastructure, or particular service requirements.
  • Docker: Unless it ships as a single static binary, a service gets dockerized. We value our time and respect other developers, so we use dev containers for reproducible development environments.
  • Kubernetes: Essential for managing complex, scalable deployments. We have experience setting up and managing efficient Kubernetes clusters.
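
In practice, the "dockerize everything" policy reduces to a small, repeatable Dockerfile per service. A hedged sketch for a Python service (the base image tag, paths, and module name are illustrative, not a Launchist standard):

```dockerfile
# Illustrative build for a Python service; names and paths are examples.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Run as a non-root user for safety.
RUN useradd --create-home appuser
USER appuser

CMD ["python", "-m", "app"]
```

Copying `requirements.txt` before the rest of the source means dependency installation is re-run only when dependencies change, which keeps rebuilds fast.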

API Development

  • RESTful APIs: Still the standard for most web service integrations due to their simplicity and wide support.
  • GraphQL: We implement this when clients need more flexible data querying and reduced over-fetching.
  • gRPC: Used for high-performance, low-latency microservices communication, particularly in polyglot environments (e.g. Go and Python microservices).
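
Over-fetching, mentioned above, is the concrete cost GraphQL addresses: a REST endpoint returns the whole resource representation, while a GraphQL query names exactly the fields it needs. A toy illustration of the difference (the resource, field names, and `select` helper are invented):

```python
# A REST endpoint typically returns the full resource:
rest_response = {
    "id": 42,
    "name": "Ada",
    "email": "ada@example.com",
    "address": {"city": "London", "zip": "EC1"},
    "order_history": ["#1001", "#1002"],
}

# A GraphQL query such as `{ user(id: 42) { name email } }` returns
# only the requested fields. We simulate that selection here:
def select(resource: dict, fields: list[str]) -> dict:
    return {f: resource[f] for f in fields}

graphql_response = select(rest_response, ["name", "email"])
```

For a mobile client that only renders a name and an email address, the smaller payload is the entire point.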

Machine Learning & AI

  • PyTorch: Our preferred framework for deep learning projects, offering flexibility and dynamic computational graphs.
  • spaCy: Utilized for advanced natural language processing tasks.
  • LangChain: Employed for building applications with large language models (LLMs).
  • LLM Vendor APIs (Anthropic, OpenAI): Integrated when projects require state-of-the-art language models.
  • Private LLMs: Implemented when there’s a focus on privacy, especially for GDPR compliance.
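
Whether the model behind an application is a vendor API or a private deployment, LLM applications of the kind LangChain targets typically follow the same retrieve-then-prompt pattern: look up relevant context, then assemble it into a prompt for the model. A stdlib-only sketch of that pattern (no real model is called; the documents, template, and function names are our own illustration):

```python
# Toy knowledge base standing in for a document store (invented content).
DOCUMENTS = {
    "refunds": "Refunds are processed within 5 business days.",
    "shipping": "Standard shipping takes 3-7 days.",
}

def retrieve(question: str) -> str:
    # Naive keyword match standing in for a vector-store lookup.
    for topic, text in DOCUMENTS.items():
        if topic in question.lower():
            return text
    return ""

def build_prompt(question: str) -> str:
    # Assemble retrieved context and the question into a model prompt.
    context = retrieve(question)
    return f"Context: {context}\nQuestion: {question}\nAnswer:"

prompt = build_prompt("How long do refunds take?")
```

In a real system the retrieval step would hit a vector database and the prompt would be sent to the model; grounding answers in retrieved context is also what keeps sensitive data local when a private LLM is used.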

Our goal is to ensure:

  1. Flexibility to adapt to each project’s unique requirements
  2. Efficient use of resources by choosing the right tool for each job
  3. Ability to leverage cutting-edge technologies when they offer real benefits, not just because they’re trendy

We continuously evaluate new technologies, but we always prioritize stable, well-supported solutions that align with our clients’ long-term interests. Our full-stack approach allows us to see the bigger picture, ensuring that our technology choices contribute to overall system coherence and efficiency.