Hi, I'm one of the Logflare devs and I work on observability at Supabase.

To directly address some of the tools you mentioned:

* Logstash is the transport and transformation portion of the Elastic stack (along with Filebeat), and it performs the same functions as Vector. It is out of scope for Logflare, which focuses on acting as a centralised server to point all your logging pipelines at.
* Kibana is the visualisation layer of the Elastic stack. For Supabase, this functionality is taken over by the Supabase Studio, and the reporting capabilities will eventually converge to match APM services like Sentry.
* Splunk's core is not open source, and it is very much geared towards large-contract enterprise customers. Their main product is also much more geared towards visualisation than bare log analysis.

When it comes to a logging server/service, you'd consider the following factors:

* Cost: logging is quite expensive, and the way that Logflare leverages BigQuery (and, in the future, other OLAP engines) cuts down storage costs greatly.
* Reliability: the last thing you would want is for your application to take high load and go down, but be unable to debug it because the high traffic led to high log load and subsequently took down your o11y server as well. Logflare is built on the BEAM and can handle high loads without breaking a sweat; we've handled over 10x average load for ingestion spikes and Logflare just chugs along.
* Querying: storing logs isn't enough; you need to effectively debug and aggregate your logs for insights. This incurs both querying costs and additional complexity, in the sense that your storage mechanism must be able to handle such complex queries without breaking the bank. Logflare performs optimisations for these, such as table partitioning and caching, to make sure costs are kept low. This allows Supabase to expose all logging data to users and lets them perform joins and filters within the Logs Explorer to their hearts' content.

I tried LogFlare (which is now Supabase Logs) in January, but it didn't work well for what I wanted. Supabase Logs / Logflare seems primarily interested in creating graphs from logs rather than in using logs for diagnostic purposes.

I've been looking for a log solution that's good for the use case of high retention but low volume. I have a few small apps that generate a few MB of logs per month, so basically nothing, but I still want all my logs searchable in one place. Most logging solutions set retention based on time rather than data size, so regardless of how much you're logging, they throw away your logs after somewhere between 7 and 30 days unless you're on an insane Enterprise plan.

I was excited about LogFlare because it supports unlimited retention, but I ran into too many issues and had to cancel my subscription:

* To search your logs, you need to write a SQL-like query in LogFlare's DSL. You can't just search for a plain string (e.g. api/auth) like you can with other log analytics. Usually, what I want to see is the log line in context. For example, if I search "catastrophic error", I want to see the log lines leading up to it, not just that specific line.
* Search is limited to a maximum of 100 results. If you want to see more results, you need to rewrite your query rather than just scroll up or hit a "load more" button.
* When you do adjust the query to a larger time window, the query will fail because it can't generate a graph unless you also adjust the group_by in your query to match the new time window's limits. This is an annoying obstacle if you don't care about graphing the results and are just trying to diagnose an issue in your logs.

I emailed support to ask if I was misunderstanding how to use Logflare or if it was just designed for a different use case. I was on a paid plan, but I still had to wait 3 business days for a response. When the response came, they just said that it was designed for me but didn't address any of the issues I brought up.

I do like that Logflare/Supabase let you bring your own BigQuery. That's nice for customers like me who want low volume with high retention. I hope they continue iterating because it has potential. In the meantime, I've found LogTail to be a pretty good alternative, but they're limited to 30 days of retention even on the highest-tier plan.

> How does Logflare's approach contrast with other entrants like /99 who are leveraging blob stores (Cloudflare R2) for storage and serverless for querying for lower costs?
> Can you point out ways in which Logflare uses it that makes it so (for ex, is it tiered-storage with a BQ front-end)?

After 3 months, BigQuery storage ends up being about half the cost of object storage if you use partitioned tables and don't edit the data.
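The "half the cost after 3 months" claim can be sanity-checked with rough arithmetic. The prices below are assumed US list prices per GB-month and may be out of date (BigQuery bills untouched partitions at a discounted long-term rate after 90 days without edits); this is a sketch, not a quote of any provider's current pricing.

```python
# Rough storage-cost comparison for logs that are written once and never edited.
# Assumed per-GB-month list prices (verify against current pricing pages):
BQ_ACTIVE = 0.020     # BigQuery active logical storage
BQ_LONG_TERM = 0.010  # BigQuery long-term rate, after 90 days without edits
S3_STANDARD = 0.023   # S3 Standard object storage, for comparison

def monthly_cost_per_gb(age_months: int) -> float:
    """BigQuery cost of holding one GB during a given month of its life.

    A partition that hasn't been edited for 90 days drops to the long-term
    rate, so months 0-2 bill as active and month 3 onward as long-term.
    Partitioned tables matter here: the discount applies per partition,
    so old log data ages into the cheap tier even while you keep ingesting.
    """
    return BQ_ACTIVE if age_months < 3 else BQ_LONG_TERM

# After the first 3 months, each further month is billed at half the active
# rate, and under half the assumed object-storage rate:
assert monthly_cost_per_gb(0) == BQ_ACTIVE
assert monthly_cost_per_gb(3) == BQ_LONG_TERM
assert BQ_LONG_TERM < S3_STANDARD / 2
```

With these figures, a GB of logs untouched for a year costs mostly long-term months, which is where the "about half" comparison against flat-rate object storage comes from.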
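The "log line in context" complaint above is essentially what `grep -B` gives you on flat files. A minimal sketch of the behaviour the commenter is asking for (a hypothetical helper for illustration, not a Logflare or LogTail feature):

```python
def search_with_context(lines, needle, before=3):
    """Return each matching log line together with the lines leading up
    to it, similar to `grep -B`. Purely illustrative."""
    results = []
    for i, line in enumerate(lines):
        if needle in line:
            start = max(0, i - before)
            results.append(lines[start:i + 1])
    return results

logs = [
    "request started",
    "db connection pool exhausted",
    "retrying query",
    "catastrophic error: giving up",
]

# Each hit includes the preceding lines, not just the matching one.
hits = search_with_context(logs, "catastrophic error", before=2)
```

The point of the example is the return shape: a diagnostic search wants the window of lines before the match, whereas a bare filter query returns only the matching rows.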
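The group_by failure on larger time windows is consistent with a charting layer that caps how many time buckets it will render: widening the window without coarsening the group_by granularity multiplies the bucket count past the cap. The cap below is an assumed number for illustration only; the actual limit and mechanism in Logflare may differ.

```python
from datetime import timedelta

MAX_CHART_BUCKETS = 1000  # assumed cap, not Logflare's documented limit

def bucket_count(window: timedelta, group_by: timedelta) -> int:
    """How many time buckets a chart needs for this window/granularity."""
    return int(window / group_by)

def window_fits(window: timedelta, group_by: timedelta) -> bool:
    """A chart renders only if the bucket count stays under the cap, which
    is why widening the window forces you to coarsen the group_by too."""
    return bucket_count(window, group_by) <= MAX_CHART_BUCKETS

# Grouping by minute works for an hour-long window but not a week:
assert window_fits(timedelta(hours=1), timedelta(minutes=1))      # 60 buckets
assert not window_fits(timedelta(weeks=1), timedelta(minutes=1))  # 10080 buckets
assert window_fits(timedelta(weeks=1), timedelta(hours=1))        # 168 buckets
```

Under this model, a week-at-minute-granularity query must be rewritten to group by hour before it can chart, matching the behaviour the commenter describes.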