Poly can search your content in natural language, across a broad range of file types and down to the page, paragraph, pixel, or point in time. We also provide an integrated agent that can take actions on your files, such as creating, editing, summarizing, and researching. Any action you can take, the agent can take too: renaming, moving, tagging, annotating, and organizing files for you. The agent can also read URLs and YouTube links, search the web, and even download files for you.
Here are some public drives that you can poke around in (note: it doesn’t work in Safari yet—sorry! we’re working on it.)
Every issue of the Whole Earth Catalogue: https://poly.app/shared/whole-earth-catalogues
Archive of old Playstation Manuals: https://poly.app/shared/playstation-manuals-archive
Mini archive of Orson Welles interviews and commercial spots: https://poly.app/shared/orson-welles-archive
Archive of Salvador Dali’s paintings for Alice in Wonderland: https://poly.app/shared/salvador-dali-alice-in-wonderland
To try it out, navigate to one of these public folders and use the agent or search to find things. The demo video above gives a rough idea of how the UI works. Select files by clicking on them. Quick-view by pressing space. Open the details for any file by pressing cmd + i. Search from the top middle bar (or press cmd + K); all searches use semantic similarity and search within the files. Or use the agent from the bottom-right tools menu (or press cmd + ?) to ask about the files, have the agent search for you, summarize things, etc.
We decided to build this after launching an early image-gen company back in March 2022, and realizing how painful it was for users to store, manage, and search their libraries, especially in a world of generative media. Despite our service having over 150,000 users at that point, we realized that our true calling was fixing the file browser to make it intelligent, so we shut our service down in 2023 and pivoted to this.
We think Poly will be a great fit for anyone who wants to do useful things with their files, such as summarizing research papers, finding the right media or asset, creating a shareable portfolio, searching for a particular form or document, and producing reports and overviews. Of course, it's a great way to organize your genAI assets as well. Or just use it to organize notes, links, inspo, etc.
Under the hood, Poly is built on our advanced search model, Polyembed-v1, which natively supports multimodal search across text, documents, spreadsheets, presentations, images, audio, video, PDFs, and more. You can search by phrase, file similarity, color, face, and several other kinds of features. The agent is particularly skilled at using search: you can type something like "find me the last lease agreement I signed" and it will look for it by searching, reading the first few files, and searching again if nothing matches. But the quality of our embedding model means it almost always finds the file in the first search.
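To make the retrieval step concrete, here is a minimal sketch of embedding-based ranking. Polyembed itself is proprietary and multimodal, so this uses a toy bag-of-words "embedding" purely to illustrate the rank-by-similarity idea that the agent's search loop sits on top of:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: a bag-of-words vector.
    # A real model maps text, images, audio, etc. into one shared
    # dense vector space so "similar meaning" means "nearby vectors".
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, files: dict[str, str], k: int = 3) -> list[str]:
    # Rank files by similarity to the query and return the top k.
    q = embed(query)
    ranked = sorted(files, key=lambda name: cosine(q, embed(files[name])),
                    reverse=True)
    return ranked[:k]
```

The agent loop described above (search, read the top hits, refine the query, search again) is an iteration on top of a ranker like this one.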
It works identically across web and desktop, except that on desktop it syncs your cloud files to a folder (just like Google Drive). On the web we use clever caching to enable offline support and file-conflict recovery. We've taken great pains to make our system faster than your existing file browser, even if you're using it from a web browser.
File storage plans are currently: a 100 GB free tier, a 2 TB paid tier at $10/month, and 1¢ per GB per month beyond the 2 TB. We also have rate limits for agent use that vary by tier.
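A quick sketch of what those tiers imply for monthly cost (illustrative only; actual billing details such as proration or binary vs. decimal terabytes may differ):

```python
def monthly_cost_usd(storage_gb: float) -> float:
    """Estimate the monthly storage cost from the tiers above.

    Assumes: 100 GB free tier, 2 TB (taken as 2048 GB) paid tier at
    $10/month, and $0.01 per GB per month beyond 2 TB.
    """
    FREE_TIER_GB = 100
    PAID_TIER_GB = 2048       # 2 TB
    PAID_TIER_PRICE = 10.00   # USD / month
    OVERAGE_PER_GB = 0.01     # USD / GB / month

    if storage_gb <= FREE_TIER_GB:
        return 0.0
    overage = max(0.0, storage_gb - PAID_TIER_GB)
    return PAID_TIER_PRICE + overage * OVERAGE_PER_GB
```

For example, storing 3 TB (3072 GB) would come to $10 plus 1024 GB of overage at 1¢ each, i.e. about $20.24/month under these assumptions.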
We’re excited to expand with many features over the coming months, including “virtual files” (store your Google Docs in Poly), sync from other hosting providers, mobile apps, an MCP ecosystem for the agent, access to web search and deep research modes, offline search, local file support (on desktop), third-party sources (WebDAV, NAS), and a whole lot more.
Our waitlist is now open and we’ll be letting folks in starting today! Sign up at https://poly.app.
We’d also love to hear your thoughts (and concerns) about what we’re building, as we’re early in this journey so your feedback can very much shape the future of our company!
Also, while it looks visually stunning, it really doesn’t help introduce the product. I find it rather confusing, honestly.
The first thing I noticed: when you searched for "upcycling", every result you showed had the exact-match keyword "upcycling" in its filename. If this is just a text grep, then the product is redundant: "Everything" already exists.
If you really were returning results for "upcycling" based on an internal LLM-based summary of disk content, that would be great, but your live demo does not show that.
Also, further down, your "experience" tries to show how user input is handled. But since the "experience" is fixed-size and cannot be zoomed in, it is impossible to read any of the blurry text.
Again, I understand that this "experience" looks impressive. But it is a complete distraction and does not help at all to introduce your product.
I spent 5 minutes looking at this, one of them spent looking at a loading spinner like it's 2005 again, and I have no clue what your USP is.
And to be honest, your HN post here isn't helping either. Think "Dropbox + NotebookLM + Perplexity"? What? That makes no sense. It's a complete distraction. Explain what your product/service is, what its USP is, and what value it provides to me.
Instead, after a wall of text, you explain to me that one can select files by clicking on them.
So if a search for "fashion upcycling" also returns "case_study_on_how_to_turn_PET_into_clothings.docx", then that's a great USP, and that is what you should point out in your pitch and visualization.
This is because in real life most people already fail at finding good search terms. Many people are not even able to come up with a collection of words to put into the Google search box. And the word(s) they do choose may be too specific, or not specific enough. So any kind of "unsharp" search helps those kinds of users.
Feedback:
Supporting an enterprise air-gapped deployment of this clearly has huge value. It really doesn't matter where the data is stored if the indexing/embedding is happening on your infrastructure.
Enterprises with compliance requirements are quite likely the types of clients looking for ways to save time searching through petabytes of data.
1. We have an embedded agent that can read, edit, organize, and take actions on your files. It can read almost any media type, which is why it's "Cursor for files", even though: "isn't _Cursor_ already Cursor for files?". In other words, Cursor is "Poly for text" :)
2. We provide you with an "IDE", i.e. a file browser. However, unlike Cursor, we actually built our own engine rather than relying on an existing open-source one like VS Code.
Lastly, agree about the enterprise solution. All in due time for sure!
I do not find the idea of bringing my data to your search engine attractive. The data should stay put and you should merely index it.
And since LLMs let you generate things, you also have a feature to do that. But who is going to want to create things -- armed with only a text prompt -- while they are searching? Rarely have I wanted to combine images like in your demo. And on the rare occasion I did -- like today! -- I used a GUI to do it in. I suppose creatives would find this feature more useful.
However, at least for my use-case, this is a very infrequent problem. So, a monthly subscription and the security risk wouldn't be worth it. Though I'm certain there are people who work with files all day and for them, this might be god-send!
One customer type that would absolutely love this is architecture studios. Basically all the data they generate (drawings, plans, presentations) lives either on SharePoint or NFS (or some other file system like ACC).
If you can provide them a solution they can host on their private cloud (air-gapped), you have an enterprise deal.
Why air-gapped? First, many documents they generate are secret (military work, or competition entries: think the next tallest tower in the world). Second, copyright: for example, the German standards body DIN does not allow you to feed its documents into an LLM (I'm not talking about training, just RAG), except (!) if you keep it in a private cloud like Azure.
How do I know that? Because I worked on that problem at howie.systems.
Is there something in particular that we are vulnerable to that doesn't also affect Google Drive, Dropbox, iCloud Drive, OneDrive, etc.?
With Google Drive, I choose which files to upload. It doesn't have broad access to everything on my computer.
Dropbox, iCloud, and OneDrive are just backup services, so in theory they could back up your files as an encrypted blob and have no way to read them. Unfortunately, they don't encrypt them that way (which is partly why I don't use those services). But at least I have their "promise" that they won't read or analyze my files, which makes me feel better even if it's a weak promise.
Your service, on the other hand, by its very nature reads and analyzes all of my files on a remote server.
I don't know about the other services, but Dropbox _does_ read your files. https://help.dropbox.com/security/privacy-policy-faq
> We may build models that identify keywords and topics from a given document. These models may be trained on your documents and metadata, and power features within Dropbox such as improved search relevance, auto-sorting and organization features, and document summaries.
This app gives me the same heebie-jeebies as the "Warp" terminal that was heavily pushed (and then rebuked) on HN. I don't want to replace my file browser or terminal with a subscription service, full-stop. The most magical featureset on the market won't move my needle, but then again maybe I'm not the ideal customer for this kind of product.
(uses custom algorithm instead of embeddings. Not as powerful in terms of contextual understanding, but enough for search and a LOT faster, plus minimal resource usage)
I am curious to know if your agent can connect to a user’s custom MCP server in order to make tool calls with files attached. For example, I have a custom MCP server that catalogs the images submitted in the RPS 2.0 payload. I have a Python wrapper script that converts Cursor’s stdio MCP interface into an HTTP request, inserting htaccess credentials and scanning the tool-call args for any files it needs to attach to the payload. This lets me set up the wrapper script as an MCP server in the Cursor settings, so the built-in Auto-mode agents can do some tasks similar to what you are describing.
With that said, it sounds like your system has better agents and tools for working with things besides just text files, so I am wondering whether your agentic tools might be a good fit for a situation where, for example, I have a messy filesystem full of images that may already be duplicated in the database. If I provide MCP tools for my database, can your system and agents use tools that I define? Can your system attach access credentials when making calls to third-party MCP tools?
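For reference, the stdio-to-HTTP bridge described above can be sketched roughly like this. The endpoint URL, the basic-auth ("htaccess-style") scheme, and the newline-delimited JSON-RPC framing are my assumptions for illustration, not Poly's or Cursor's actual API:

```python
import base64
import json
import sys
import urllib.request

def make_forwarder(url: str, user: str, password: str, opener=None):
    """Build a function that forwards one JSON-RPC message over HTTP.

    `opener` defaults to urllib.request.urlopen and is injectable
    so the forwarding logic can be tested without a real server.
    """
    opener = opener or urllib.request.urlopen
    token = base64.b64encode(f"{user}:{password}".encode()).decode()

    def forward(message: dict) -> dict:
        req = urllib.request.Request(
            url,
            data=json.dumps(message).encode(),
            headers={
                "Content-Type": "application/json",
                # htaccess-style credentials inserted on every call
                "Authorization": f"Basic {token}",
            },
        )
        with opener(req) as resp:
            return json.loads(resp.read())

    return forward

def main():
    # Cursor speaks MCP over stdio as one JSON-RPC message per line;
    # each line is relayed to the HTTP backend and the reply echoed.
    forward = make_forwarder("https://example.invalid/mcp", "user", "secret")
    for line in sys.stdin:
        if line.strip():
            print(json.dumps(forward(json.loads(line))), flush=True)

if __name__ == "__main__":
    main()
```

A real wrapper would also need the file-attachment scanning step the commenter mentions; that part depends entirely on the backend's payload format, so it is omitted here.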
we experimented with something like this: https://local-note-taker.vercel.app/chat
repo in case you'd rather run it locally: https://github.com/tambo-ai/local-note-taker
Curious to compare how our two ideas are built differently, feel free to reach out (email in my profile)
Do y'all have a solution for this here? Some kind of safety layer of some kind to easily review and optionally revert actions the agent has taken across my entire file system?
The search part of this is cool, but if the write side is solved, shut up and take my money.
Stupid, not-good, unsatisfying reasons, I agree!
Or if I work in finance, or healthcare, or law, or government, or a hardware design company, I don’t want my files leaving my network. Those are very important use cases, much more important than searching my personal laptop. I want this for WORK, not my little photo collection or notes or whatever.
This is a great use case for modern LLM/embedding models but gotta be local to be actually useful in the places where it’s most needed.
Feedback: this site crashes my Safari (Sequoia) reliably, so I can't see the demo.
Maybe I'm too old, but after I read the post I thought: oh, this is an "AI-first Quicksilver". Who remembers that plugin for Mac? I don't think it stayed relevant enough.
We don't use transcription or any post-processing; we simply embed the file. Our embedding has an additional inner dimension to support long-duration content, so it's [N x D], where D is the embedding dimension and N is an internal dimension that varies with the content.
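For anyone curious how a query might be scored against such an [N x D] embedding: the scoring function isn't disclosed here, but max-over-segments cosine similarity is one common way to use multi-vector embeddings (e.g. a long video embedded as N segment vectors). A hypothetical sketch:

```python
import numpy as np

def maxsim_score(query_vec: np.ndarray, doc_mat: np.ndarray) -> float:
    """Score a [D] query vector against an [N x D] document embedding.

    Hypothetical: the source only states the embedding shape, with N
    varying by content; taking the best-matching segment's cosine
    similarity is one plausible scoring rule, not the disclosed one.
    """
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_mat / np.linalg.norm(doc_mat, axis=1, keepdims=True)
    return float(np.max(d @ q))  # best-matching segment wins
```

Under this rule, a two-hour video only needs one well-matching segment to rank highly for a query about a single moment, which is what "down to the point in time" search requires.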
Only way I would ever use something like this is with a local/self-host model that I run myself on my own hardware, with meticulous control over what the thing can access on the internet.
Here's what we always tell founders about demo videos: "What works well for HN is raw and direct, with zero production values. Skip any introductions and jump straight into showing your product doing what it does best. Voiceover is good, but no logos or music!"