Building Mem: The AI Notes App (feat. Pinecone)

Mar 19, 2024

Last week, we held an in-person event at Mem HQ where Kevin (CEO @ Mem) and Ram (CTO @ Pinecone) came together to share their thoughts on building an AI-first notes app and how Mem leverages cutting-edge technology along with Pinecone's serverless infrastructure. Here are a few of our key takeaways:

From Kevin

  1. Under the hood, Mem's Related Notes, Smart Search, and AI Chat are powered by multi-step systems that combine LLMs, vector search technology, and real-time data processing to deliver a highly personalized user experience.

  2. In Mem's indexing pipeline, we make notes keyword searchable through Elasticsearch and semantically searchable through Pinecone, while handling note edits in an efficient and scalable way.

  3. In Mem's retrieval pipeline, we enable you to search the way you think with a hybrid approach combining keyword matches, semantic similarity searches, and deterministic retrieval.

  4. In Mem's Chat pipeline, we let you interact with your knowledge by leveraging an LLM to understand user queries, search across content, and generate contextually relevant responses.

  5. Mem is hiring! Join us on our mission to augment human intelligence by helping people do more with their thoughts.

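To make takeaway 2 concrete, here is a minimal, illustrative sketch of dual indexing with upsert-on-edit semantics. It is not Mem's actual code: the real pipeline writes to Elasticsearch and Pinecone, which are stood in for here by in-memory dicts, and `embed` is a hypothetical toy embedding, not a real model.

```python
def embed(text: str) -> list[float]:
    # Hypothetical stand-in for a real embedding model: a tiny
    # bag-of-letters vector, just enough to illustrate the flow.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

keyword_index: dict[str, set[str]] = {}    # term -> note ids (Elasticsearch stand-in)
vector_index: dict[str, list[float]] = {}  # note id -> embedding (Pinecone stand-in)

def index_note(note_id: str, text: str) -> None:
    # Editing a note re-indexes it under the same id, so both indexes
    # always reflect the latest version (no stale postings left behind).
    for ids in keyword_index.values():
        ids.discard(note_id)
    for term in set(text.lower().split()):
        keyword_index.setdefault(term, set()).add(note_id)
    vector_index[note_id] = embed(text)  # upsert: overwrite by id

index_note("n1", "meeting notes about pinecone")
index_note("n1", "meeting notes about elasticsearch")  # edit replaces old terms
```

Keying both indexes by note id is what makes edits cheap: a re-index is just an overwrite, not a delete-and-rebuild.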
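Takeaway 3's hybrid approach can be sketched as a weighted fusion of a keyword score and a semantic score. This is only an illustration: a real system would use BM25 (Elasticsearch) and approximate nearest-neighbor search (Pinecone), and the scoring functions below are deliberately tiny stand-ins.

```python
import math

def embed(text: str) -> list[float]:
    # Toy bag-of-letters embedding, standing in for a real model.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def keyword_score(query: str, text: str) -> float:
    # Fraction of query terms that appear in the note (BM25 stand-in).
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query: str, notes: dict[str, str], alpha: float = 0.5):
    # alpha weights keyword vs. semantic relevance; deterministic retrieval
    # (e.g. an exact title match) could be layered on top as a filter.
    qv = embed(query)
    scored = [
        (nid, alpha * keyword_score(query, text) + (1 - alpha) * cosine(qv, embed(text)))
        for nid, text in notes.items()
    ]
    return sorted(scored, key=lambda s: s[1], reverse=True)

notes = {"n1": "grocery list apples", "n2": "vector search with pinecone"}
results = hybrid_search("pinecone vector search", notes)
```

The fusion weight is the tunable part: leaning toward keywords rewards exact recall, while leaning toward the semantic score lets you "search the way you think."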
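And the Chat pipeline in takeaway 4 follows a familiar retrieval-augmented shape: retrieve relevant notes, build a prompt, generate a grounded answer. In this sketch the LLM call is stubbed out and retrieval is a naive term-overlap ranking over a plain dict; Mem's real pipeline would use an actual LLM and the indexes described above.

```python
def retrieve(query: str, notes: dict[str, str], top_k: int = 2) -> list[str]:
    # Naive relevance: rank notes by how many query terms they contain.
    q = set(query.lower().split())
    ranked = sorted(
        notes.values(),
        key=lambda t: len(q & set(t.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def stub_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM completion call.
    return "Answer based on: " + prompt

def chat(query: str, notes: dict[str, str]) -> str:
    context = retrieve(query, notes)             # search across content
    prompt = "Context:\n" + "\n".join(context) + f"\nQuestion: {query}"
    return stub_llm(prompt)                      # generate a grounded response

notes = {"n1": "standup is at 9am daily", "n2": "lunch menu ideas"}
reply = chat("when is standup", notes)
```

The key property is that generation only ever sees retrieved context plus the question, which is what keeps answers tied to the user's own notes.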
From Ram

  1. The main challenges developers face in working with LLMs center around context window limitations, content attribution, and privacy.

  2. Vector databases need to scale at a fair cost without sacrificing freshness.

  3. "Traditional" vector databases suffer from a lack of elasticity, bloated costs, and poor UX. Pinecone cuts across these issues, letting you store all you want while paying only for what you search.

Let us know if you had any other learnings! Thank you all for a lively evening of discussion.
