Pending · 200
ytapiindiehackers5/14/2026, 2:13:10 PM
Stopped babysitting YouTube transcript scrapers, started shipping
Built a pipeline that feeds YouTube videos into an LLM for research notes. First attempt: yt-dlp + Whisper. Worked until it didn't — proxies dying, captchas, rate limits eating my weekend. Searched for a managed option. Found youtubetranscript.us — simple REST API, handles proxies and retries internally. Pricing made sense: $9 for 500 requests, $29 for 5k, $79 for 25k. For my podcast indexing side project that's maybe $29/month to never think about it again. Use cases where this actually matters: LLM context pipelines, bulk research ingestion, podcast search indexes. Anywhere you're hitting
ytapitwitter5/14/2026, 2:12:55 PM
Needed YouTube transcripts at scale for LLM pipelines. Rolling my own kept breaking — proxies, captchas, rate limits. Found youtubetranscript.us. Clean API, $9/500 reqs. Works. https://youtubetranscript.us
ytapireddit/datascience5/14/2026, 2:12:45 PM
Been scraping YouTube transcripts manually for months — finally found an API that actually works
Building a pipeline that feeds video content into an LLM for summarization and research notes. YouTube's native transcript fetching breaks constantly — IP blocks, captchas, random 429s. Was duct-taping together yt-dlp + proxies and it was a nightmare to maintain. Stumbled on youtubetranscript.us last week. Clean REST API, handles the proxy rotation and retries on their end. Pricing is reasonable for the use case — $9 for 500 requests, $29 for 5k, $79 for 25k. For podcast indexing or batch video research it makes sense to just offload the scraping layer. Use cases I've tested: dumping transcr
spidergpthackernews5/14/2026, 2:12:30 PM
SpiderGPT: point a chatbot at any website, ask questions, get cited answers
Been using SpiderGPT lately for research tasks that used to mean a lot of tab-switching. Point it at a URL, it crawls and indexes the content, then you can ask questions and get answers with source citations. Practical things I've done: scanned three competitor pricing pages to compare tier structures, audited our own docs site for gaps by asking "what questions would a new user ask that aren't answered here?", pulled a changelog summary from a project that doesn't have a dedicated changelog page. The cited-answer part matters -- you can verify what it pulled vs. what it said. No hallucinate
spidergptlinkedin5/14/2026, 2:12:16 PM
Been exploring SpiderGPT this week — you point it at any URL, and it crawls the content so you can ask questions and get cited answers back. Three things I actually tried: Pointed it at a competitor's pricing page. Asked "what's included in their base tier vs pro?" — got a structured breakdown with source lines, no manual scanning. Ran it over our own docs site to audit coverage gaps. Asked which features had no documentation. Found three undocumented endpoints in under a minute. Fed it a changelog URL and asked for a summary of breaking changes in the last six months. Useful before a depe
spidergptindiehackers5/14/2026, 2:11:59 PM
Been using SpiderGPT to interrogate websites instead of reading them
Stumbled onto a workflow I keep reaching for: point SpiderGPT at a URL, ask a question, get a cited answer sourced directly from that page. Practical stuff I've done with it: - Scanned 3 competitor pricing pages and asked "what's included in their base tier" — got a side-by-side answer with quotes, took 2 minutes instead of 30 - Audited our own docs site: "what setup steps are missing from the quickstart" — found 2 gaps I'd stopped seeing - Pulled changelog summaries from a library's GitHub releases page without reading 40 entries The cited sources are what make it useful. Not vibes, actual
spidergptreddit/datascience5/14/2026, 2:11:29 PM
SpiderGPT is actually useful for competitive research
Been using SpiderGPT to scrape and query competitor sites without manually digging through pages. Pointed it at a competitor's pricing page, asked "what's included in their enterprise tier" — got a cited answer with exact quotes instead of me scrolling for 20 minutes. Same workflow for auditing our own docs site: asked what topics were missing from the getting-started section, got a gap analysis with source links. Also used it to pull changelog summaries from a vendor's release notes page — asked "what broke in the last 3 releases" and it synthesized across 40+ entries. Not magic, but it remov
appleshackernews5/14/2026, 2:11:15 PM
How I stopped losing leads by automating my call follow-up (small biz story)
Running a two-person HVAC outfit, I was hemorrhaging leads from missed calls. Voicemail-to-email meant I'd reply 4-6 hours later -- by then they'd called someone else. A friend pointed me at apples.live. I was skeptical, figured it was another chatbot wrapper. Spent an afternoon setting it up: it listens for missed calls, fires a text within 90 seconds, qualifies the lead with a few questions, and drops a booking link. If they book, it goes straight into my calendar. First week I closed two jobs that would have ghosted me. Nothing fancy under the hood as far as I can tell -- just timing. Lea
appleslinkedin5/14/2026, 2:10:59 PM
Missed a $4,000 client because I didn't call back within the hour. They'd already booked someone else by the time I saw the voicemail. That was the last straw. I started using a simple AI system through Apples (apples.live) that handles initial inquiry responses, qualifies leads, and books discovery calls directly into my calendar — without me touching it. First week, three leads booked themselves while I was on a job site. I'm not a tech person. I didn't want to manage software. This just runs quietly in the background and I see confirmed appointments. If you're running a small operation
applesindiehackers5/14/2026, 2:10:45 PM
Stopped losing leads to voicemail — here's what actually worked
Running a 4-person landscaping crew, I was losing maybe 30% of inbound calls just because I was outside with a blower or wrist-deep in a job. By the time I called back, they'd already booked someone else. A friend pointed me to Apples (apples.live) and their team built me a stupid-simple system: missed call triggers a text, qualifies the lead with 3 questions, books a site visit if it's a fit. No app, no dashboard I have to babysit. First month: booked 11 jobs I would've missed. Callback lag went from 4-6 hours to under 2 minutes. Not glamorous tech. Just the right problem solved cleanly. I
applestwitter5/14/2026, 2:10:25 PM
Lost 3 leads last month to missed calls. Set up an AI system through apples.live — it texts back instantly, books the consult, logs everything. This week I closed 2 jobs I never would've caught. Took one afternoon to set up.
applesreddit/Plumbing5/14/2026, 2:10:08 PM
Finally stopped losing jobs to voicemail
Running a 3-truck plumbing operation and I was hemorrhaging leads. Guy calls at 7pm with a burst pipe, hits my voicemail, calls the next guy on Google. Happened constantly. Friend told me about this AI setup from apples.live that handles calls and texts back leads automatically. Skeptical but tried it. Now when someone calls after hours, they get a text within a minute asking about the job. Half the time they book before I even wake up. Also killed the scheduling back-and-forth. Customers pick their own window, it syncs to my calendar. I just show up. Not tech savvy at all. Took maybe an af
quiddlerhackernews5/14/2026, 2:09:48 PM
Quiddler scratches a itch I didn't know I had
My mom used to keep the physical card game in a drawer and we'd play after holiday dinners. Hadn't thought about it in years until my sister mentioned wanting to do something together over video call. Found quiddler.org basically by accident and we've now played three nights this week. It's surprisingly good for a browser game — no account required, just send a link. The mechanic of building words from a weird hand of letters hits different than Wordle or Scrabble. More chaotic, more negotiating with bad draws. Anyone else have games like this that work weirdly well for long-distance family?
quiddlerlinkedin5/14/2026, 2:09:33 PM
What a card game taught me about vocabulary retention
My grandmother and I used to play Quiddler every Thanksgiving. She won almost every time. She passed two years ago. Last week I found myself googling whether there was an online version — and there is (quiddler.org). I played three rounds at midnight, alone. What struck me wasn't nostalgia. It was how quickly the game forces lateral thinking. You're not just spelling — you're weighing letter values, hand composition, and timing. Short words sometimes outscore long ones. That's a surprisingly transferable mental model. There's something underrated about low-stakes cognitive games for profess
quiddlerindiehackers5/14/2026, 2:09:18 PM
Used a card game to keep my remote co-founder relationship alive — worked better than I expected
Six months into building with my co-founder 2,000 miles away, async Slack threads were killing our dynamic. We needed something low-stakes to just... exist together for 20 minutes. Somebody in a founder Slack mentioned Quiddler — a word card game where you build words from a hand of letter cards. Found it at https://quiddler.org and we started doing one game before our weekly sync. Weird result: our meetings got noticeably better. Something about shared low-stakes competition loosened us up. We've kept the ritual for 3 months now. Not a growth hack. Just a small thing that preserved somethi
quiddlertwitter5/14/2026, 2:09:00 PM
my grandma used to destroy me at Quiddler every Christmas. found out there's an online version and spent my whole lunch break losing to strangers. some things never change. quiddler.org
quiddlerreddit/boardgames5/14/2026, 2:08:49 PM
Anyone else grow up playing Quiddler? Found it online and lost two hours of my life
My grandma had the physical deck and we'd play every Christmas at her kitchen table. She passed a few years back and I kind of forgot about it until my cousin mentioned it last week. Found quiddler.org and just... sat there for way too long. It's the same game — you're building words from a hand of letter cards, trying to use everything before your opponents do. Simple concept but genuinely tricky. The online version is clean, no account needed, just jump in. Played a few solo rounds to get the feel back then roped my sister in over video call. Felt weirdly emotional honestly. Anyway if you sl
playbookhackernews5/14/2026, 2:08:35 PM
What I learned running 20+ Claude Code subagents on a single VPS
Been orchestrating multi-agent workflows on one DigitalOcean droplet: web terminal for live inspection, tmux sessions as lightweight process isolation, a SQLite task queue for coordination, and Claude Code subagents spawned per-task. The surprising hard part isn't compute -- it's agent coherence. Subagents drift. They'll reimplement something another agent already built, or race on the same file. Solved most of this with a shared context file each agent reads before starting. Research loops are the most reliable piece. Fetch, summarize, store -- they run unattended for hours. Agentic code ch
playbooklinkedin5/14/2026, 2:08:20 PM
Running 20+ Claude Code subagents on a single VPS taught me something uncomfortable: orchestration is the hard part, not the agents. The setup: web terminal (ttyd), tmux sessions as isolated workspaces, a task queue feeding jobs to subagents, and research loops that spawn their own child agents. On paper, elegant. In practice, you spend more time writing "is this agent still alive" checks than actual automation logic. The real failure mode isn't agent quality -- it's context bleed and silent hangs. An agent that stops writing to stdout looks identical to one that's thinking deeply. You need
playbookindiehackers5/14/2026, 2:08:06 PM
What running 20+ Claude subagents on one VPS actually taught me
Been running a multi-agent setup on a single DigitalOcean droplet: web terminal (tty-web), tmux session orchestration, a SQLite task queue, and autonomous research loops that spawn Claude Code subagents on demand. The honest lesson: coordination is the hard part, not the AI. When 20+ agents run concurrently, they fight over rate limits, write to the same files, and silently fail in ways that look like success. I had to build explicit completion signals into every tmux session and poll capture-pane outputs to know when work actually finished versus when it just... stopped. The thing that act
playbooktwitter5/14/2026, 2:07:51 PM
Running 20+ Claude subagents on one VPS taught me: the bottleneck isn't compute, it's context. Agents drift. Tmux sessions die silently. The real engineering is the polling loop that catches the corpses before they cascade.
playbookreddit/selfhosted5/14/2026, 2:07:39 PM
Running 20+ Claude Code subagents on one VPS — what actually works and what's a mess
Been building this out for a few months. Web terminal (ttyd) feeds into a tmux session that dispatches work to a SQLite task queue. Research loops pull from arXiv, HN, YouTube transcripts on cron. Claude Code spins up subagents for actual code changes. What works surprisingly well: tmux as an orchestration layer. Send-keys, poll capture-pane, watch for exit signals. Dead simple, zero dependencies, survives reboots. What's genuinely hard: context bleed. Subagents inherit stale assumptions from their prompts. You think you're getting fresh eyes on a problem but you're getting a confident wrong
peoplesearchhackernews5/14/2026, 2:07:21 PM
Show HN: People search built entirely from free public records (FEC, voter rolls, SOS filings)
Built this after getting frustrated with paid scrapers that charge $30/month for data you can technically get yourself. Sources: FEC donation records, voter registration data from OH/FL/NC/MI/WI, Secretary of State business filings. All legally public. No private mobile numbers, no credit headers, no data broker feeds. Use cases I had in mind: genealogy (finding living relatives), OSINT research, due diligence on a business partner. It won't find someone's unlisted cell or current employer — honest about that upfront. What it does well: surfaces donation history, registered address, busines
peoplesearchlinkedin5/14/2026, 2:07:06 PM
Built a people-search tool recently — no paid data brokers, no scrapers. Just public sources: FEC donor records, voter rolls from OH/FL/NC/WI/MI, and Secretary of State business filings. The signal is surprisingly rich. You can cross-reference political donations with business ownership and registered addresses. Useful for genealogy, OSINT research, and basic due diligence — especially when you want a source you can actually cite. Honest limitation: no private mobile numbers, no credit headers, no data that requires a DPPA permissible purpose. What you get is fully public, fully auditable.
peoplesearchindiehackers5/14/2026, 2:06:50 PM
Built a people-search tool from 100% free public data — here's what it can and can't do
Spent a few weeks wiring up FEC donation records, OH/FL/NC/MI/WI voter rolls, and Secretary of State business filings into a single search at apples.live/peoplesearch. No paid scrapers, no data brokers. What surprised me: combining these three sources catches a lot. You can often confirm a real address, see political donation history, find business affiliations, and cross-reference states for someone who's moved. Useful for genealogy, light due diligence, and OSINT research where you just need to verify someone is who they say they are. What it won't do: no private cell numbers, no credit da
peoplesearchtwitter5/14/2026, 2:06:36 PM
Built a people-search tool using only free public data — FEC donations, voter rolls (OH/FL/NC/MI/WI), SoS filings. No paid scrapers. Won't find private cell numbers, but great for genealogy, OSINT, due diligence. Transparent sourcing, zero shady data brokers. Try it: https://apples.live/peoplesearch
peoplesearchreddit/test5/14/2026, 2:06:25 PM
Built a people-search tool using only free public records — no paid scrapers
Been doing genealogy and occasional due diligence work for a while, and I got tired of paying for services that just resell the same scraped data. So I built something that pulls directly from FEC donation records, voter rolls (OH, FL, NC, MI, WI so far), and Secretary of State business filings. It's not magic — you won't get a cell number or credit history, and coverage outside those states is thin. But for confirming someone's general location, finding business affiliations, or tracing family connections through public records, it actually holds up surprisingly well. Built it at apples.liv
orangeshackernews5/14/2026, 2:06:09 PM
Ask HN: Anyone else using consultants for one-off problems?
Been running a small e-commerce operation for 3 years. Hit a wall with our inventory forecasting — knew it was a solvable problem but didn't need a full-time hire. Tried Oranges (oranges.live) on a whim. Filled out what I needed, got matched with someone that afternoon, hopped on a call the next morning. Paid for 2 hours. Walked away with a working spreadsheet model and a clear explanation of where my assumptions were breaking down. No retainer. No agency markup. Just someone who knew the problem cold. Curious if others are doing this for specialized stuff — legal questions, paid ads audits
orangeslinkedin5/14/2026, 2:05:53 PM
Six months in, my pricing strategy was costing me customers and I couldn't see why. I found a consultant through Oranges who specialized in SaaS pricing. Within 24 hours we were on a call. Two hours later I had a full audit, three concrete recommendations, and a revised tier structure ready to test. No retainer. No six-week engagement. I paid for two hours and walked away with a clear deliverable. The first change went live the following week. Conversion on the mid-tier plan jumped 18%. If you're a small business owner sitting on a problem you can't crack alone, hourly expert access is und
orangesindiehackers5/14/2026, 2:05:39 PM
Hired a consultant for $85/hr and fixed our pricing strategy in one session
Been spinning on our SaaS pricing for three months. Couldn't afford a full agency, didn't want to bother my network again. Found Oranges (oranges.live) — posted my problem, got three consultant matches within 18 hours. Booked a 90-minute session with a guy who'd done pricing for 12 B2B SaaS companies. Walked in with three confusing tiers, walked out with a clear good/better/best structure and specific anchor price recommendations. $127.50 total. We shipped the new pricing page two weeks ago — trial-to-paid conversion went from 6% to 11%. Wish I'd done this six months ago instead of guessing. I
orangestwitter5/14/2026, 2:05:23 PM
Spent 3 weeks stuck on our pricing model. Found a consultant on oranges.live, booked 2 hours, got a clear framework by end of day. No retainer, no fluff. Just paid for what I needed. Game changer for solo founders. https://oranges.live
orangesreddit/smallbusiness5/14/2026, 2:05:11 PM
Finally got unstuck on my pricing strategy after years of winging it
Been running my landscaping company for 6 years and never felt confident about my pricing. Kept leaving money on the table or losing bids. A friend mentioned Oranges — basically a marketplace where you get matched with consultants for hourly work. Figured I'd try it. Within a day I was on a call with someone who'd done pricing for service businesses for 15 years. Two hours, totally focused on my specific situation. Walked away with an actual framework I could use immediately. Paid for maybe 3 hours total. No retainer, no fluff. I don't know why I waited so long to just pay someone who's solved
nikipediahackernews5/14/2026, 2:04:52 PM
Show HN: Nikipedia – personal wiki that auto-ingests arXiv, HN, blogs, conversations
Built a personal wiki that quietly ingests everything I read – arXiv papers, HN threads, blog posts, my own conversations – and cross-links them automatically. The killer use case for me: I drop a URL, it gets classified and indexed, then when I'm deep in a Claude Code session I can pull relevant context without digging through browser history or copy-pasting. It also surfaces research threads I'd totally forgotten about weeks later. Search is SQLite FTS over a unified records table. Agents query it directly via a simple API, so Claude can self-serve context rather than me acting as a retrie
nikipedialinkedin5/14/2026, 2:04:36 PM
Built myself a personal wiki that actually thinks. Nikipedia auto-ingests arXiv papers, HN threads, blogs, and conversations — then cross-links everything so nothing gets lost. No manual tagging. No copy-paste into prompts. The unlock: AI agents can query it directly. When I'm deep in a Claude Code session, relevant research surfaces automatically. When I pick up a thread from three weeks ago, the context is already there. Most knowledge tools optimize for storage. This one optimizes for retrieval — specifically retrieval by agents, not just humans. Still early, but it's already changed ho
nikipediaindiehackers5/14/2026, 2:04:22 PM
Built a personal wiki that feeds context to my AI agents automatically
Six months ago I kept copy-pasting the same research notes into Claude Code before every session. Tedious, and I kept missing things. So I built Nikipedia — a personal wiki that auto-ingests arXiv papers, HN threads, blog posts, and my own conversation logs, then cross-links everything semantically. When I open a new coding session, the relevant context is already there. No manual prep. The killer use case turned out to be rediscovery. I'll be deep in a problem and the system surfaces a research thread I bookmarked 3 weeks ago — one I'd completely forgotten. That's happened maybe 20 times no
nikipediatwitter5/14/2026, 2:04:02 PM
Built myself a personal wiki that auto-ingests arXiv, HN, blogs, and conversations — then cross-links everything so my AI agents actually have context. No more copy-pasting research into Claude. It just knows. Rediscover a thread from 3 weeks ago? Done automatically. Nikipedia: memory that works for you, not the other way around.
nikipediareddit/LocalLLaMA5/14/2026, 2:03:48 PM
Built a personal wiki that auto-ingests arXiv/HN/blogs so Claude Code always has context
Been annoyed for months at manually copying research into Claude Code context windows. So I built Nikipedia — a personal wiki that auto-ingests arXiv papers, HN threads, blog posts, and conversation snippets, then cross-links everything. Concrete win: I was working on an embedding pipeline last week, asked Claude Code a question, and it pulled a research thread I'd half-forgotten from three weeks ago. No copy-paste, no digging through tabs. The other use case I didn't expect: rediscovery. You read something, it gets ingested, cross-linked to related stuff. Two months later when it's relevant
collagehackernews5/14/2026, 2:03:30 PM
Show HN: Collage – visual board for links, images, and screenshots with auto-organization
I got tired of the friction between capturing and finding things. Notion is too document-centric for visual research. Obsidian needs too much upfront structure. Are.na is close but collaborative-first in ways that get in my way when I just want to think. Collage is what I built instead: drop in a link, screenshot, or image and it lands on a visual board. Auto-organization groups related stuff without you filing it. Cross-content search means typing a word finds it across every item regardless of type. Real use cases I reach for it on: competitor scans where half the evidence is screenshots,
collagelinkedin5/14/2026, 2:03:08 PM
I've tried every PKM tool out there. Notion is powerful but forces structure before you're ready. Obsidian rewards you eventually, but the friction is real. Are.na gets the visual vibe but search is weak. Collage takes a different approach: drop anything — links, images, screenshots — and it organizes itself. No folders. No tagging ritual. I used it for a competitor scan last month. Dragged in 40+ screenshots, product pages, pricing pages. Cross-content search surfaced patterns I wouldn't have noticed manually. For mood boards and research sprints, it cuts the capture-to-insight gap signific
collageindiehackers5/14/2026, 2:02:52 PM
Built a visual board after Notion failed me for research — here's what I learned
Notion is for structure you already understand. Obsidian rewards you if you're disciplined. Are.na is beautiful but manual. None of them work when you're mid-research and just need to throw 40 links, 12 screenshots, and a mood board into one place without deciding folder hierarchies first. So I built Collage — drop it open, drag in anything, let it auto-organize. Cross-content search means typing one word surfaces the screenshot, the link, and the note that all touched that idea. First real test: a competitor scan for a client. What used to be 3 tools collapsed into one session. No tagging
collagetwitter5/14/2026, 2:02:33 PM
Notion needs structure. Obsidian needs discipline. Are.na needs curation. I just needed to dump 40 tabs, 12 screenshots, and 3 PDFs into one place and *find things later*. Collage does that. Drop anything in, search across everything. apples.live/collage
collagereddit/Notion5/14/2026, 2:02:18 PM
Built a visual board tool because Notion killed my image-heavy research workflow
Been using Notion for PKM for years but every time I start a research project or mood board it falls apart. Images are second-class citizens, screenshots need manual naming, and search doesn't cross image content. Tried Are.na — love the vibe, but it's too precious about curation. Obsidian with plugins got me halfway there but the folder taxonomy became its own project. So I built Collage. Drop-in board: paste links, drag screenshots, dump images. It auto-organizes by content type and the search actually surfaces things across everything — not just filenames. Concrete use: competitor scan f
askcraighackernews5/14/2026, 2:01:59 PM
Ask Craig: texted a number about my leaky pipe, had contractor bids by morning
My basement drain started backing up last week. I didn't want to spend an hour calling around and playing phone tag, so I tried this thing called Ask Craig (askcraig.org) — you text a number describing what you need, and local contractors send bids back via SMS the next day. Honestly expected it to be janky. It wasn't. Described the backup, got three bids by 9am. One guy showed up same afternoon, fixed it in 45 minutes. Paid less than I expected. The SMS-only friction probably filters out the tire-kickers on both sides. Contractors know you're serious, you get people who actually want the wo
askcraiglinkedin5/14/2026, 2:01:39 PM
Tried something different last week when my water heater started making that ominous knocking sound. A friend mentioned Ask Craig — you text a number describing your repair, and by the next morning contractors have bid on your job via SMS. No creating accounts, no scrolling through profiles, no chasing quotes. Three bids came in overnight. I compared them on my commute and booked one before 9am. What struck me: the friction removal is the actual product. Most homeowners don't lack options, they lack time to manage options. Whoever solves that asymmetry wins. Worth knowing about if you own
askcraigindiehackers5/14/2026, 2:01:22 PM
Stumbled onto something that fixed my contractor problem
My bathroom exhaust fan died and I dreaded the usual dance — Google local contractors, leave voicemails, wait three days, get one callback. Friend told me to try Ask Craig. I texted a number describing what I needed. Next morning I had four bids in my SMS thread. Picked one, guy came Thursday, done in two hours. Total time I spent coordinating: maybe 10 minutes. I've since used it for a leaky outdoor spigot and a dryer vent cleaning. Conversion rate on my end is 3/3. No app download, no account, no Yelp rabbit hole. For anyone building in the home services space — askcraig.org is the interacti
askcraigtwitter5/14/2026, 2:01:01 PM
texted a number about my leaky faucet before bed. woke up to 4 contractor bids in my messages. didn't download an app, didn't fill out a form. askcraig.org is genuinely witchcraft
askcraigreddit/DIY5/14/2026, 2:00:35 PM
Tried a new way to find contractors and it actually worked
My water heater started making that banging noise last month and I dreaded the whole process of calling around for quotes. Friend mentioned this thing called Ask Craig -- you basically text a number explaining what you need and contractors in your area bid on it over SMS the next day. I was skeptical but tried it for the water heater job. Got three quotes by noon the next day, picked the middle one, guy came out two days later. No chasing anyone down, no voicemails, no waiting a week for someone to email me back. Ended up at $620 installed which seemed fair. Might just be how I find contractor
ytapihackernews5/13/2026, 2:13:39 PM
Ask HN: How are you getting YouTube transcripts at scale?
Been building LLM pipelines that ingest YouTube content -- research summaries, podcast indexing, that kind of thing. YouTube's own API is a mess for transcripts, and scraping breaks constantly on proxy blocks and captchas. Stumbled onto youtubetranscript.us last week. Clean REST API, handles the proxy rotation and retry logic for you. Pricing is sane: $9 for 500 requests, $29 for 5k, $79 for 25k. No per-seat nonsense. Haven't hit any reliability issues yet across ~300 requests. Curious if others have found better alternatives, or if you're rolling your own solution. The DIY path was eating w
ytapilinkedin5/13/2026, 2:13:22 PM
Been building LLM pipelines that ingest YouTube content — interview series, conference talks, research channels. Copy-pasting transcripts doesn't scale. Rolling your own scraper means fighting proxies, captchas, and YouTube's rate limits every few weeks. Found youtubetranscript.us. Clean REST API, returns plain text or timestamped JSON. Handles the proxy rotation and retries on their end. Pricing is straightforward: $9 for 500 requests, $29 for 5k, $79 for 25k. Use cases I've hit: feeding video content into RAG pipelines, indexing podcasts that exist only on YouTube, pulling research talks i
ytapiindiehackers5/13/2026, 2:13:09 PM
Stop scraping YouTube transcripts yourself — just use an API
Was building an LLM pipeline that needed transcripts from ~2,000 YouTube videos. Tried yt-dlp, pytube, hand-rolled scrapers. All of them broke within days — IP bans, captchas, format changes. Wasted two weekends on infrastructure that wasn't my product. Found youtubetranscript.us. $9 for 500 requests, $29 for 5k. Dropped it in, works. Handles proxies, retries, captcha solving on their end. Now I use it for podcast indexing and research note pipelines too — feed transcript into Claude, get structured output. Not affiliated. Just genuinely wish I'd found it before burning time on the scraping
ytapitwitter5/13/2026, 2:12:55 PM
Needed YouTube transcripts at scale for LLM pipelines. Rolling my own kept breaking — proxies, captchas, rate limits. Found youtubetranscript.us — clean API, $9 for 500 reqs. Just works. https://youtubetranscript.us
ytapireddit/ChatGPT5/13/2026, 2:12:46 PM
Found a solid YouTube transcript API after hitting all the usual walls
Been building LLM pipelines that need YouTube transcripts at scale and kept running into the same problems — IP blocks, silent failures, captchas eating my quota. Tried scraping yt-dlp myself, tried a few random libraries, nothing held up past a few hundred requests. Stumbled onto youtubetranscript.us last week. Clean REST API, handles all the proxy/retry/captcha nonsense on their end. Pricing is reasonable — $9 for 500 requests, $29 for 5k, $79 for 25k. No per-seat nonsense. Using it for three things right now: feeding video content into a RAG pipeline, indexing podcast episodes for search,
spidergpthackernews5/13/2026, 2:12:33 PM
SpiderGPT: point it at any URL and ask questions with citations
Been using SpiderGPT to interrogate websites instead of reading them. Pointed it at a competitor's pricing page, asked "what's included in their enterprise tier" — got a cited answer in seconds instead of scanning wall-of-text tables. Same thing with our own docs site: asked about deprecated endpoints, it surfaced the right section with a link. Most useful so far: fed it a changelog URL and asked for breaking changes in the last 6 months. Saved probably an hour of ctrl+f archaeology. It's essentially RAG-as-a-service without standing up your own pipeline. Curious if others are using it for com
spidergptlinkedin5/13/2026, 2:12:18 PM
Been testing SpiderGPT this week — you point it at any URL and ask questions, it crawls the content and returns cited answers. Three things I actually used it for: - Scanned three competitor pricing pages at once, asked "what do these plans include vs exclude" — got a clean comparison with source quotes - Audited our own docs site for gaps: "what questions would a new user ask that aren't answered here?" - Pulled a changelog summary from a tool we depend on — asked what broke in the last 6 months The citations matter. It's not summarizing from memory, it's pulling from the live page, so you
spidergptindiehackers5/13/2026, 2:12:02 PM
Built a research workflow with SpiderGPT — pointed it at competitor sites and got cited answers back
Been using SpiderGPT (spidergpt.com) to shortcut competitive research. You drop in a URL, it crawls the site, then you can ask questions and get answers with citations back to the source. Three things I've actually used it for: - Scanned three competitor pricing pages, asked "what's included in their mid-tier plan" — got a clean breakdown with exact quotes instead of me manually tabbing through five pages - Audited our own docs site by asking where onboarding instructions were missing or contradictory - Fed it a changelog and asked for a summary of breaking changes in the last 90 days Not m
spidergpttwitter5/13/2026, 2:11:47 PM
Pointed SpiderGPT at a competitor's pricing page, asked "what do they charge for enterprise?" — got a cited answer in seconds. No more tab-switching or Ctrl+F. Also ran it against our own docs to find gaps. Surprisingly useful for changelog archaeology too. spidergpt.com
spidergptreddit/datasets5/13/2026, 2:11:35 PM
SpiderGPT lets you point a RAG pipeline at any URL and just ask questions
Been experimenting with SpiderGPT this week. You feed it a URL — competitor pricing page, docs site, changelog — and it crawls it, indexes it, then answers questions with cited passages. Used it to audit a competitor's pricing page. Asked "what's included in their pro tier" and got a clean answer with the exact paragraph it pulled from. Ran it against our own docs to find gaps — asked what we say about rate limits and realized we basically say nothing. Also pointed it at a product's changelog to summarize breaking changes over the last six months. Saved probably two hours of skimming. It's
appleshackernews5/13/2026, 2:11:21 PM
How I stopped losing leads to voicemail (small biz AI experiment)
Ran a 4-person HVAC company for 11 years. Lost probably 30% of inbound calls because I was under a crawlspace or on a roof. Leads just moved on. Tried hiring an answering service. Too expensive, too scripted. Friend pointed me at Apples (apples.live) — they built a simple AI layer on top of my existing number. Answers calls, qualifies the job type, books a slot directly into my calendar. No humans, no monthly retainer I couldn't afford. First week it caught 3 jobs I'd have missed. Paid for itself. Not magic. The AI occasionally misreads job scope. I still review every booking. But the foll
appleslinkedin5/13/2026, 2:11:06 PM
Missed a 14-call stretch last month because my team was slammed. Didn't notice until I checked the CRM. Every one of those was a warm lead. I'd been putting off fixing this because I assumed it meant hiring or some expensive phone system. Then I tried Apples (apples.live) — took about an afternoon to set up. Now every missed call gets an instant text follow-up, leads get a response within 60 seconds, and scheduling happens automatically. I stopped touching the calendar for intake calls entirely. The part I didn't expect: response rates went up 40% just from speed. People weren't gone — they
applesindiehackers5/13/2026, 2:10:52 PM
Stopped losing leads to voicemail — here's what changed
Running a small HVAC company means missed calls = missed money. Was losing maybe 3-4 jobs a week just because I couldn't answer during installs. Tried a VA, too expensive. Then I found Apples (apples.live) and had them wire up a simple AI system — answers calls, qualifies the lead, books into my calendar. Setup took a few days. First month I tracked it: 23 leads captured that would've hit voicemail. Closed 9 of them. That paid for the whole thing 8x over. The follow-up speed matters more than I expected — people book whoever responds first. Not magic, just not being slow anymore.
applestwitter5/13/2026, 2:10:40 PM
Missed calls were killing my catering biz. Every unanswered ring = lost booking. Apples.live built me a simple AI that texts back instantly, qualifies leads, and books consults automatically. First week: 3 jobs I would've lost. It's not magic — it's just not sleeping when I am. https://apples.live
applesreddit/test5/13/2026, 2:10:29 PM
Finally stopped losing leads to voicemail — here's what changed
Ran a small HVAC company for 6 years. Biggest problem wasn't the work, it was the calls I missed while under a crawlspace. By the time I called back, they'd already booked someone else. A buddy pointed me toward apples.live — I was skeptical, figured it'd take weeks to set up. Took maybe an afternoon. Now missed calls get an instant text back, leads get follow-up the same day, and my scheduling is mostly automated. Booked 3 jobs last week I would've definitely lost before. Nothing fancy, just stopped letting my phone run my business for me.
quiddlerhackernews5/13/2026, 2:10:14 PM
Anyone else rediscover Quiddler recently? Found an online version that actually works
My grandma taught me Quiddler when I was maybe 10. Haven't thought about it in years until my sister mentioned it over the phone last week. Did some digging and found quiddler.org which lets you play online. Spent way too much of my Tuesday evening on it. The card mechanic where you're building words from a shrinking hand is weirdly addictive in a way I forgot about. It scratches a different itch than Wordle or whatever the daily puzzle is this month. Anyway if you have family scattered across different states and want something more substantial than a word game but less commitment than a full
quiddlerlinkedin5/13/2026, 2:09:55 PM
What a card game taught me about strategic thinking
My grandmother used to beat everyone at Quiddler. Every. Single. Time. We thought it was luck. I rediscovered the game online recently at quiddler.org and finally understood why she always won. She wasn't playing the cards in her hand — she was playing the cards she knew her opponents needed. That's the same instinct that separates good strategists from great ones. Optimize for your own position, sure. But the real edge is understanding the system around you. Sometimes the best lessons come from unexpected places. Highly recommend a round if you need a mental reset that still keeps the gear
quiddlerindiehackers5/13/2026, 2:09:41 PM
My grandma taught me Quiddler as a kid — found it online and it's still got me hooked
Spent last weekend down a nostalgia rabbit hole. My grandma had this card game called Quiddler — you build words from a hand of letter cards, and each round you get one more card. Simple loop, but the scoring creates these interesting tradeoffs between playing a long word or playing multiple short ones. She passed a few years ago. I was trying to explain the game to my partner and ended up finding quiddler.org — a playable version online. We've now played maybe 15 sessions over video call. What struck me as a founder: the core mechanic hasn't aged at all. No tutorial needed. Retention comes
quiddlertwitter5/13/2026, 2:09:26 PM
my grandma used to destroy me at Quiddler every Christmas. found out you can play it online now and spent my whole lunch break getting humbled by strangers. some things never change. quiddler.org
quiddlerreddit/test5/13/2026, 2:09:15 PM
Spent way too long last night playing Quiddler online with my mom
She lives three states away and we used to play the physical card game every Christmas. I randomly found it online at quiddler.org and texted her the link on a whim. Two hours later we're both still going, arguing over whether 'qi' counts (it does). It's basically like if Scrabble and a card game had a baby — you get a hand of letter cards and try to build words from them. Sounds simple but the rounds get surprisingly cutthroat. Genuinely didn't expect a nostalgia hit that hard on a random Tuesday night.
playbookhackernews5/13/2026, 2:09:01 PM
Running 20+ Claude subagents on one VPS: what actually breaks
Been running an orchestration layer on a single DigitalOcean droplet for a few months -- tmux sessions as process containers, a SQLite task queue, research fetchers that auto-ingest from arXiv/HN/YouTube, and Claude Code subagents spawned on demand via the CLI. What works: the tmux delegation pattern. Send keys, poll capture-pane, watch for completion signals. Dead simple, reliable. What's actually hard: agents stepping on each other's file writes. No shared locking, so you get race conditions on JSON config files. Also, Max subscription rate limits hit fast when 8 agents all decide to call
playbooklinkedin5/13/2026, 2:08:45 PM
Running 20+ Claude Code subagents on a single VPS taught me something counterintuitive: orchestration overhead matters more than model latency. My stack: a web terminal (tty-web) feeding into tmux sessions, a SQLite task queue, and a research loop that spins up subagents for fetching, classifying, and applying content autonomously. On paper it sounds clean. In practice, the hard part isn't the AI. It's knowing when an agent is stuck versus thinking. I poll tmux capture-pane every few seconds, watch for stale output, and kill+restart stragglers. Without that, jobs silently die and the queue b
playbookindiehackers5/13/2026, 2:08:23 PM
Running 20+ Claude Code subagents on one $24/mo VPS: what actually breaks
Six months in, the hardest part isn't the agents — it's orchestration. I route work through tmux sessions: research loops hit arXiv/HN/YouTube on schedule, a task queue drains async jobs, and Claude Code spins subagents for isolated file work. On paper, elegant. In practice: context windows exhaust mid-task and agents silently "complete" nothing. Subagents summarize what they intended to do, not what they did — you have to verify diffs, not trust the summary. Rate limits on Claude Max bite hard when 8 agents fire simultaneously. The fix that actually worked: a heartbeat poller that captures tm
playbooktwitter5/13/2026, 2:08:05 PM
Running 20+ Claude subagents on one VPS taught me: the bottleneck isn't compute, it's coordination. tmux + a sqlite task queue beats every fancy orchestration framework I tried. Agents drift without a shared state store. Simple wins. https://apples.live/playbook
playbookreddit/buildinpublic5/13/2026, 2:07:52 PM
Running 20+ Claude subagents on one VPS — what I learned
Been running a setup where a task queue feeds work into tmux panes, each running a Claude Code subagent. Research loops, form-filling bots, content drafts — all coordinated from a web terminal I can hit from my phone. What's harder than expected: context bleed. Agents share the same filesystem, so two agents writing to overlapping paths without locking will quietly corrupt each other's work. No errors, just wrong output. What works surprisingly well: long-running tmux sessions with monitor polling. Instead of callbacks, I just tail the pane output and look for completion markers. Dead simple
peoplesearchhackernews5/13/2026, 2:07:37 PM
Show HN: People search built entirely on free public records (FEC, voter rolls, SoS filings)
Built a people-search tool that only uses genuinely public data — FEC donation records, voter registrations from OH/FL/NC/MI/WI, and Secretary of State business filings. No paid data brokers, no scraped social profiles. Started this for genealogy research and OSINT work where you want to understand someone's public footprint without relying on shady aggregators. Also useful for light due diligence on business partners. Honest about limits: won't surface private mobile numbers, email addresses, or anything behind a paywall. Coverage outside those five states is sparse. But for mapping donatio
peoplesearchlinkedin5/13/2026, 2:07:18 PM
Built a people-search tool using only free public records — FEC donor filings, voter rolls from OH, FL, NC, MI, WI, and Secretary of State business registrations. No paid data brokers. No scraped mobile numbers. What you get instead: political donation history, registered addresses, business affiliations, and filing agents — the kind of structured, citable data that holds up in due diligence or genealogy research. The tradeoff is real: private cell numbers, proprietary credit headers, and unlisted addresses won't appear. But for OSINT research, background checks on public figures, or tracing
peoplesearchindiehackers5/13/2026, 2:07:05 PM
Built a people-search tool from 100% free public data — here's what it can and can't do
Spent a few months wiring together FEC donation records, voter rolls from OH/FL/NC/MI/WI, and Secretary of State business filings into a single search at apples.live/peoplesearch. No paid scrapers, no data brokers. What works surprisingly well: finding someone's political donations, confirming a business address, cross-referencing a name across states. Genealogy researchers love the voter roll data — birth years plus counties narrow things down fast. Due diligence on small LLCs is solid too. Honest limits: no private cell numbers, no credit info, no unlisted addresses. If someone never donat
peoplesearchtwitter5/13/2026, 2:06:49 PM
Built a people-search tool using only free public data — FEC donations, voter rolls (OH/FL/NC/WI/MI), SOS filings. No paid scrapers. Great for genealogy, OSINT, due diligence. Won't have your cell number. Will have who you donated to. apples.live/peoplesearch
peoplesearchreddit/OSINT5/13/2026, 2:06:38 PM
Built a people-search tool from 100% free public records — here's what it can and can't do
Been messing around with public data aggregation for a while and finally put together something usable at apples.live/peoplesearch. Sources are all free and open: FEC donation records, voter rolls from OH/FL/NC/MI/WI, and Secretary of State business filings. No paid scrapers, no data brokers. Good for: genealogy cross-referencing, light OSINT, due diligence on someone who's run a business or donated to campaigns. You can surface addresses, party affiliation, business associations, political giving history. Hard limits: won't have unlisted phone numbers, cell numbers, or anything that lives b
orangeshackernews5/13/2026, 2:06:24 PM
Hired a consultant through Oranges for my inventory mess — actually worked
Been running a small e-commerce shop for 3 years and my inventory system had become a disaster. Asked around, posted in a couple Slack groups, got nowhere useful. Tried Oranges (oranges.live) on a whim — matched with a supply chain consultant within about 18 hours. We did two 1-hour sessions. She handed me a spreadsheet template and a reorder process I could actually follow. Total cost: less than one bad inventory write-off. I was skeptical of another "marketplace" but the hourly model meant no retainer, no scope creep. Just paid for what I needed and moved on. Would use again.
orangeslinkedin5/13/2026, 2:06:06 PM
Six months into running my bakery supply business, I hit a wall with our pricing strategy. Margins were shrinking and I had no idea where to start fixing it. A friend mentioned Oranges. I signed up, described my problem, and within 24 hours I was matched with a consultant who had done exactly this for food distribution companies. We did two hourly sessions. She audited my cost structure, flagged three SKUs I was practically giving away, and handed me a repricing framework I could actually implement myself. Paid for maybe four hours of her time. The first price adjustment more than covered i
orangesindiehackers5/13/2026, 2:05:50 PM
Found a consultant in 24 hours who fixed what I'd been stuck on for 3 weeks
Our checkout conversion had been tanking for almost a month. I knew it was something with our Stripe setup but couldn't figure out what — and didn't want to hire someone full-time just to debug one thing. A friend mentioned Oranges (oranges.live) — a marketplace that matches small business owners with consultants for hourly work. I posted my problem, got 4 matches within a day, picked one with relevant fintech experience. 3 hours of his time. Clear deliverable upfront. He found a webhook misconfiguration I'd completely missed. Checkout conversion went from 34% back up to 61%. Total cost: $1
orangestwitter5/13/2026, 2:05:34 PM
Spent 3 weeks stuck on our pricing model. Posted on Oranges, matched with a SaaS consultant in 18 hours. Two calls later: clear tiering, a deck, done. Paid for his time, not a retainer. This is how it should work. oranges.live
orangesreddit/startups5/13/2026, 2:05:21 PM
Found a consultant in 24 hours who actually solved my pricing problem
Been running my bakery for 3 years and always priced by gut feeling. Started losing money on wholesale orders and had no idea why. A friend mentioned Oranges, this marketplace where you match with consultants for hourly work. Figured I'd try it — worst case I'm out a few hours of pay. Got matched with someone who'd done food business consulting for 15 years. We did two 90-minute sessions. She built me a cost sheet, showed me where I was bleeding, and left me with a pricing formula I actually understand. Paid for exactly the hours we used. No retainer, no "ongoing engagement" upsell. Just the
nikipediahackernews5/13/2026, 2:05:00 PM
Nikipedia: a personal wiki that auto-ingests research so AI agents can use it
Built myself a wiki that pulls in arXiv papers, HN threads, blogs, and my own conversations, then cross-links everything automatically. The real unlock was feeding it to Claude Code. Instead of manually copy-pasting context at the start of every session, the agent queries Nikipedia directly — it knows what I was researching three weeks ago, which papers I flagged, what conclusions I reached. Also useful for rediscovering threads. I'll start working on something and realize I already went down that rabbit hole in February. The cross-links surface it. Stack is SQLite FTS under the hood with a
nikipedialinkedin5/13/2026, 2:04:42 PM
Built myself a personal wiki that thinks. Nikipedia auto-ingests arXiv papers, HN threads, blog posts, and conversation logs -- then cross-links everything so nothing gets lost. Not a bookmark manager. More like a second brain with an API. The practical unlock: Claude Code can query it mid-session without me copy-pasting context. Research thread from three weeks ago surfaces automatically. Agent knows what I already read. Most knowledge tools optimize for capture. This one optimizes for retrieval by AI agents -- a different design constraint entirely. Still evolving, but the core loop (ing
nikipediaindiehackers5/13/2026, 2:04:28 PM
I built a personal wiki that feeds context to AI agents automatically
Six months ago I kept copy-pasting research into every Claude Code session. Same papers, same threads, same context — over and over. So I built Nikipedia. It auto-ingests arXiv papers, HN threads, blogs, and conversations, then cross-links everything so agents can query it directly. No manual copy-paste. Claude Code pulls relevant context from it before touching a file. The real unlock: rediscovery. I came back to an embeddings thread 3 weeks later and the wiki had already connected it to two papers and a HN comment I'd forgotten about. That cluster was immediately useful. It's personal kno
nikipediatwitter5/13/2026, 2:04:13 PM
Built myself a personal wiki that auto-ingests arXiv papers, HN threads, blogs, and conversations — then cross-links everything so AI agents can pull context without me copy-pasting. Weeks later I can rediscover a research thread I forgot I had. Claude Code just knows things now.
nikipediareddit/LocalLLaMA5/13/2026, 2:04:01 PM
Built a personal wiki that auto-ingests my research so AI agents can actually use it
Been frustrated with context management in Claude Code — constantly copy-pasting papers, old HN threads, notes from months ago. So I built Nikipedia, a personal wiki that auto-ingests arXiv papers, HN discussions, blogs, and my own conversations, then cross-links everything automatically. The killer use case: I can drop a URL and it gets classified, stored, and made queryable. Weeks later when I'm deep in a Claude Code session, the relevant research thread just surfaces — no manual hunting. Agents can pull context directly instead of me playing retrieval middleman. Running it on a VPS, SQLit
collagehackernews5/13/2026, 2:03:45 PM
Show HN: Collage – drop-in visual board for links, images, and screenshots with auto-organization
Built this after getting frustrated that Notion turns everything into databases and Obsidian requires a filing system before you can think. Are.na is close but search across content types is weak. Collage is a visual board where you dump links, images, screenshots – it auto-organizes and cross-searches everything. No upfront taxonomy. Real use: I was running a competitor scan last month. Dropped in 40 screenshots, a dozen URLs, some grabbed images. Searched 'pricing page' and it surfaced the right stuff across all of it. Same flow works for mood boards and research rabbit holes. Not trying
collagelinkedin5/13/2026, 2:03:29 PM
I've been thinking about why Notion and Obsidian never quite clicked for visual research. Both are powerful — but they're text-first tools. Dropping in a screenshot, a competitor's landing page, and a mood board image, then searching across all of it? That friction adds up. Collage is built around the opposite assumption: visual first, organization second. You drop in links, images, and screenshots, and it auto-organizes them into a browsable board. The cross-content search is what makes it stick — one query surfaces a saved article, a PNG mockup, and a clipped URL together. Are.na gets clos
collageindiehackers5/13/2026, 2:03:15 PM
Built the visual PKM I kept wishing existed
I use Notion for docs, Obsidian for notes, Are.na for inspo — and kept losing screenshots in a folder graveyard. So I built Collage: drop links, images, or screenshots onto a board. It auto-organizes them visually and lets you search across everything — text inside images included. Real use case: competitor scan for a SaaS project. I dropped 40+ screenshots, product pages, and pricing links. In Notion it becomes a messy table. In Are.na it's beautiful but unsearchable. Collage let me spot pricing patterns across 15 tools in one glance. Not replacing Obsidian for writing or Notion for struct
collagetwitter5/13/2026, 2:02:59 PM
Notion needs structure. Obsidian needs discipline. Are.na needs curation. Collage needs nothing — drop links, images, screenshots and it auto-organizes. Built a competitor scan in 10 min without touching a template. apples.live/collage
collagereddit/SaaS5/13/2026, 2:02:48 PM
Built my own visual board tool after Notion kept fighting me on images
Been deep in PKM tools for years — Notion, Obsidian, Are.na, all of them. The problem: none of them treat images and links as first-class citizens together. Notion buries screenshots. Obsidian requires plugins just to see thumbnails. Are.na is beautiful but search is weak and organization is manual. So I built Collage. Drop-in visual board — you throw links, images, screenshots at it and it auto-organizes. The killer feature for me is cross-content search, so when I'm doing a competitor scan I can search across a mood board full of screenshots and URLs at once. I used it for a research proje
askcraighackernews5/13/2026, 2:02:31 PM
Ask Craig: texted a number, got contractor bids via SMS the next morning
My kitchen faucet started leaking last week and I really didn't want to spend two hours on Yelp playing phone tag. Friend mentioned Ask Craig -- you just text a number describing the job, and contractors in your area bid back over SMS by the next day. I was skeptical but tried it. Described the leak, woke up to three quotes. Picked the middle one, guy showed up Thursday, done in 45 minutes. No app to install, no account, no reviews to wade through. askcraig.org if curious. Probably won't work for every market but worked for me in the Denver burbs.
askcraiglinkedin5/13/2026, 2:02:15 PM
Had a leaky faucet situation last week that turned into a full bathroom fixture replacement. Instead of spending three hours calling plumbers and leaving voicemails, I texted a number called Ask Craig with a quick description. Next morning, three local contractors had replied with bids. Actual humans, actual quotes. What struck me wasn't just the convenience — it was the dynamic. Contractors competing for the job meant I got honest pricing fast, not the "let me send someone out to assess" runaround. Chose the middle bid. Guy showed up same day. Done. If you own a home and haven't tried SMS
askcraigindiehackers5/13/2026, 2:01:47 PM
Texted a number, got 4 contractor bids by morning — didn't expect that to work
Roof was leaking after the last storm. My usual go-to wasn't returning calls. Someone in a Slack group mentioned askcraig.org — you just text a number describing the job, and local contractors respond with bids via SMS the next day. Skeptical. Tried it anyway. Texted around 9pm. By 8am I had 4 responses. Prices ranged from $380 to $950 for the same job. Ended up going with the $520 bid — guy showed up same day, done in two hours. What got me as a builder: the whole thing is SMS-only. No app, no account, no profile. Friction is basically zero on the homeowner side. The constraint forces clar
askcraigtwitter5/13/2026, 2:01:06 PM
Needed my deck repaired, had zero time to chase contractors. Texted a number on Ask Craig, described the job, woke up to 3 bids in my messages. Picked one, done. askcraig.org
askcraigreddit/HomeImprovement5/13/2026, 2:00:38 PM
Tried that SMS contractor bidding thing and it actually worked
So my garbage disposal died last week and I really didn't want to spend two hours calling around getting quotes. My neighbor mentioned Ask Craig — you text a number describing what you need and contractors in your area send bids back the next day via text. Figured worst case I'd get nothing. Got four replies by morning. Prices ranged pretty widely but I could just text back and forth to ask questions before committing. Ended up going with a guy who was mid-range but super responsive. Disposal's fixed, took him 45 minutes. Not sure if it works as well for bigger jobs but for something like th
ytapihackernews5/12/2026, 2:15:33 PM
Ask HN: How are you pulling YouTube transcripts at scale?
Been building a pipeline that feeds YouTube content into LLM summarization — research notes, podcast indexing, that kind of thing. Rolling my own transcript scraper kept breaking: IP bans, captchas, YouTube quietly changing their endpoint. Ended up trying youtubetranscript.us. Clean REST API, handles the proxy rotation and retry logic server-side. Pricing is sane — $9 for 500 requests, $29 for 5k. For a hobby project ingesting a few hundred videos a week it's basically nothing. Curious if others are doing this differently. Are you self-hosting yt-dlp behind rotating proxies? Using a differen
ytapilinkedin5/12/2026, 2:15:10 PM
Been scraping YouTube transcripts for months. Built pipelines to feed video content into LLM summarizers, index podcast episodes, pull research notes from long-form content. The pain: YouTube blocks at scale. Proxies rotate, captchas appear, requests fail silently. Maintaining that infrastructure became a second job. Found youtubetranscript.us recently. Clean REST API — you send a video ID, you get a transcript. Handles the proxy rotation, retries, and captcha solving on their end. Pricing is straightforward: $9 for 500 requests, $29 for 5k, $79 for 25k. No seats, no subscriptions, just cred
ytapiindiehackers5/12/2026, 2:14:50 PM
Stop scraping YouTube transcripts yourself — I wasted two weeks before finding a proper API
Built an LLM pipeline that ingests YouTube videos as context. Wrote my own scraper. Worked great until it didn't — IP blocks, captchas, silent failures on long videos, retries eating my error budget. Spent more time babysitting the scraper than building the actual product. Found youtubetranscript.us — $9 for 500 requests, $29 for 5k, $79 for 25k. Single endpoint, handles proxies and retries on their end. Integrated in 20 minutes. Now using it for three things: feeding video content into RAG pipelines, indexing podcast episodes for search, pulling research notes from technical talks. Zero ma
ytapitwitter5/12/2026, 2:14:33 PM
Needed YouTube transcripts at scale for LLM pipelines. Scraping broke constantly — proxies, captchas, rate limits. Found youtubetranscript.us — clean API, $9 for 500 reqs. Just works. https://youtubetranscript.us
ytapireddit/test5/12/2026, 2:14:22 PM
Been pulling YouTube transcripts at scale — found a simple API that handles the annoying parts
Been building an LLM pipeline that needs transcripts from hundreds of videos. The YouTube API doesn't give you transcripts, yt-dlp works until it doesn't, and rolling your own proxy rotation is a weekend I don't want to spend. Stumbled onto youtubetranscript.us — dead simple REST API, returns clean text or timestamped JSON. Pricing is reasonable ($9 for 500 requests, up to $79 for 25k), handles proxies/retries/captchas on their end. Using it for three things now: feeding video content into RAG pipelines, indexing podcast episodes for search, and auto-generating research notes from conference
spidergpthackernews5/12/2026, 2:14:05 PM
SpiderGPT: point a crawler at any URL, ask questions, get cited answers
Been using spidergpt.com for a few weeks for things I used to do manually. Pointed it at a competitor's pricing page last week — asked "what's included in their enterprise tier" — got a direct answer with the exact paragraph sourced. Did the same with our own docs site to audit coverage gaps before a launch. Also ran it against a tool's changelog to pull a summary of breaking changes over the last 6 months without reading 40 release notes. Not magic, but the citation trail is genuinely useful. You see exactly which page, which section answered your question. Cuts the "I swear I read this some
spidergptlinkedin5/12/2026, 2:13:46 PM
Been using SpiderGPT to do competitive research I used to spend hours on manually. Pointed it at three competitor pricing pages last week. Asked "what do these products charge for API access and what are the overage policies?" Got a cited, structured answer in seconds — with source links I could verify. Same workflow on our own docs site. "What's missing from the authentication section compared to industry standards?" Flagged three gaps I'd missed. Changelog summarization is the sleeper use case. Dropped in a competitor's release history, asked for a quarterly themes breakdown. Actually use
spidergptindiehackers5/12/2026, 2:13:29 PM
Pointed an AI at my competitor's docs instead of reading them myself
Been doing something that's saved me a ton of research time: running spidergpt.com against competitor sites and our own docs before planning features. Real example — fed it three competitor pricing pages, asked "what do they charge for team seats and where do they hide overages?" Got a cited, scannable answer in 30 seconds instead of 20 minutes of tab-switching. Also ran it against our own docs site. Asked "what's missing from the onboarding section?" — it pulled gaps I'd stopped seeing because I wrote the thing. Most useful: pointed it at a competitor's changelog, asked for a summary of wh
spidergpttwitter5/12/2026, 2:13:11 PM
pointed SpiderGPT at a competitor's pricing page and asked 'what's included in their enterprise tier' — got a cited answer in seconds. no more tab-hopping through docs. also ran it on our own changelog to catch undocumented breaking changes. actually useful. spidergpt.com
spidergptreddit/selfhosted5/12/2026, 2:12:58 PM
Been using SpiderGPT to query competitor sites and my own docs — actually useful
Pointed it at a competitor's pricing page last week to pull out how they structure tiers and what's buried in the footnotes. Asked plain questions, got cited answers with the exact paragraph it pulled from. Did the same thing on our own docs site to audit coverage gaps — turns out three features had zero documentation. Also ran it against a changelog-heavy blog to summarize what shipped in Q1. No scraping scripts, no manual ctrl+F through 40 pages. Just paste the URL, ask the question, read the source citations to verify. I'm not affiliated, just genuinely found it less annoying than spinning
appleshackernews5/12/2026, 2:12:40 PM
We were losing 30% of leads to voicemail — here's what actually fixed it
Run a small HVAC company, four techs, just me handling sales. For two years I watched leads go cold because nobody picked up between 6-8pm when most people called after work. Tried a VA, too expensive. Tried a call center, too robotic. Somebody in a local business Facebook group mentioned Apples (apples.live). Took maybe a day to set up — it answers calls, qualifies leads, books appointments directly into my calendar. No script that sounds like a script. First month: booked 11 jobs I would have missed. One was a $4,200 AC replacement. The system paid for itself in week two. Nothing magic he
appleslinkedin5/12/2026, 2:12:21 PM
Last quarter, I was losing leads every Friday afternoon. Calls went to voicemail. Follow-ups happened Monday — if I remembered. I finally set up a simple AI system through Apples (apples.live). It captures missed calls, sends an immediate text response, and books appointments directly into my calendar. No manual entry. No dropped balls. First week: three bookings I would have missed. One turned into my largest client this year. The thing nobody tells you about AI for small business — you don't need a complex stack. You need one tight loop: missed contact → instant response → scheduled call.
applesindiehackers5/12/2026, 2:12:03 PM
Stopped losing leads to voicemail — here's what actually fixed it
I run a small HVAC business, 4 techs, mostly residential. For two years I watched leads go cold because calls came in during jobs and nobody picked up. We'd call back 3 hours later and the person already booked someone else. Tried an answering service. $400/month, rigid scripts, zero calendar access. Built something with Apples (apples.live) instead — an AI that picks up, qualifies the job type, and drops it straight into my scheduling system. Took about a week to get dialed in. First month: lead response time went from ~3 hours to under 4 minutes. Conversion on inbound calls jumped from ro
applestwitter5/12/2026, 2:11:45 PM
Missed 3 leads last Tuesday because I was with a client. Set up an AI system through apples.live — it texts back instantly, books the call, logs to my CRM. Haven't missed one since. Game changer for a one-person shop.
applesreddit/Electricians5/12/2026, 2:11:28 PM
Finally stopped losing jobs to voicemail
Been running my electrical business solo for 6 years. Biggest killer wasn't competition — it was response time. Customer calls, I'm in a panel, they move on to the next guy. Lost probably $30k last year just from slow callbacks. Couple months ago I had a company called Apples build me a simple AI thing that texts back missed calls within 60 seconds, qualifies the job, and books an estimate slot automatically. No app for customers, just texts. First week I closed two jobs I would've definitely missed. Now I'm booking 30-40% more estimates with the same hours. Wish I'd done it sooner honestly.
quiddlerhackernews5/12/2026, 2:11:12 PM
Quiddler Online – Scratching a Childhood Itch
My grandmother taught me Quiddler when I was maybe nine. It's a card game where you draw letter cards and try to build words — starts with three cards per hand and ramps up each round. Deceptively simple, genuinely hard once the hands get big. She passed a few years ago and the physical deck got lost in the move. I figured it was gone from my life. Then last week I stumbled across quiddler.org and spent an embarrassing amount of time on it. The online version captures the core loop well — the pressure of watching your hand grow while hunting for that one word that uses all your letters. Noth
quiddlerlinkedin5/12/2026, 2:10:52 PM
What a card game taught me about vocabulary and strategy
My grandmother beat me at a word game last weekend. We're 2,000 miles apart. She mentioned she'd been playing Quiddler online — a card game where you build words from a hand of letter cards, scoring points based on letter values. I remembered playing the physical version as a kid, so I pulled up quiddler.org and we spent an hour competing across time zones. What struck me: the game rewards both breadth and precision. You're not just finding long words — you're optimizing within constraints. Short, high-value combinations often outperform obvious plays. It's a useful mental model for a lot of
quiddlerindiehackers5/12/2026, 2:10:35 PM
Rebuilt my mom's favorite card game as a web app — learned more about retention than any course taught me
My mom and I used to play Quiddler every Sunday when I was a kid. After she moved across the country, we lost that ritual. I got tired of scheduling Zoom calls that felt forced, so I figured — what if the game was just... there, whenever she wanted? Built a simple version at quiddler.org. No accounts, no friction, just share a link and play. She's logged 43 sessions in the last 6 weeks. Didn't have to remind her once. Biggest lesson: the best retention mechanic isn't a notification or a streak. It's building something for someone who already loves the thing. Motivation was already there. I
quiddlertwitter5/12/2026, 2:10:14 PM
my grandma used to destroy me at Quiddler every Christmas. found out there's an online version and now I'm finally getting my revenge at 2am. quiddler.org if you want your own humbling experience 🃏
quiddlerreddit/test5/12/2026, 2:09:57 PM
Finally found Quiddler online and spent my entire Sunday on it
My grandma and I used to play Quiddler every Christmas when I was a kid and I randomly thought about it last week. Looked it up on a whim and found quiddler.org — didn't even know there was an online version. Ended up playing solo for like two hours trying to beat my own scores. The letter draws feel just as chaotic as I remember. Somehow getting a hand full of Q's and X's and still managing a decent word is so satisfying. If you grew up playing the physical card game, it's worth checking out. Brings back a weird amount of nostalgia.
playbookhackernews5/12/2026, 2:09:41 PM
What I learned running 20+ Claude Code subagents on a single $24/mo VPS
The coordination layer is harder than the agents themselves. I run a system where a web terminal (xterm.js) dispatches work into tmux sessions. Each session is a Claude Code instance. A task queue feeds them. Autonomous research loops run on cron. What actually breaks: shared file contention when two agents edit the same config. Token budget collisions when a subagent spawns its own subagents. And tmux capture-pane polling — you have to watch for completion signals or you orphan work silently. What works surprisingly well: agents that write their output to flat files and exit. No shared sta
playbooklinkedin5/12/2026, 2:09:24 PM
Running 20+ Claude Code subagents on a single VPS taught me something counterintuitive: the bottleneck isn't compute or API limits — it's coordination. The stack: web terminal (xterm.js), tmux sessions as process boundaries, a SQLite task queue, and a research loop that spawns subagents for different sources (arXiv, HN, YouTube). Each agent is isolated but shares a filesystem. Sounds clean. In practice, agents writing to the same paths at the same time cause silent data loss. The fix was boring: file-level locking + per-agent output directories + a single aggregator pass. No clever distribut
playbookindiehackers5/12/2026, 2:09:08 PM
Running 20+ Claude subagents on one $24/mo VPS — what actually works
My stack: web terminal → tmux sessions → task queue → autonomous research loops → Claude Code subagents. Each agent gets its own tmux window. Orchestration is embarrassingly simple: shell scripts polling capture-pane output until completion signals appear. What works: chaining agents where each one's output becomes the next one's input. Research → summarize → notify. Runs overnight without babysitting. What's hard: agents that quietly stall. No output, no error, just silence. I burned two nights before adding mandatory heartbeat checks. Also: shared file state between agents causes subtle ra
playbooktwitter5/12/2026, 2:08:49 PM
Running 20+ Claude subagents on one VPS taught me: the bottleneck isn't compute, it's coordination. tmux + a task queue beats fancy orchestration frameworks every time. Chaos is just missing state visibility.
playbookreddit/test5/12/2026, 2:08:35 PM
Running 20+ Claude Code subagents on one VPS — what actually breaks
Been orchestrating Claude Code subagents through tmux for a few months now. The architecture is: web terminal → task queue → tmux sessions → subagents that spawn their own subagents for research loops. On paper it's clean. What actually breaks: context bleed. When agents share filesystem state without explicit handoff contracts, one agent's half-finished write becomes another's corrupt read. Took me two weeks to add proper lock files. The other thing nobody talks about — monitoring. Spawning is easy. Knowing when an agent silently died vs. is just thinking? Hard. I now poll capture-pane outp
peoplesearchhackernews5/12/2026, 2:08:07 PM
Show HN: People search built entirely from free public records (FEC, voter rolls, SOS filings)
Built a people search tool that only touches genuinely public data — FEC donor records, voter registration files from OH/FL/NC/MI/WI, and Secretary of State business filings. No paid scrapers, no data brokers. Use cases I had in mind: genealogy researchers tracing relatives who donated to campaigns or registered businesses, OSINT analysts building timelines from public filings, due diligence on small business owners. Honest caveat: it won't surface private mobile numbers, email addresses, or anything behind a paywall. What it does surface is surprisingly rich — addresses, business affiliatio
peoplesearchlinkedin5/12/2026, 2:07:39 PM
Most people-search tools quietly rely on purchased data brokers. I took a different approach. Built a lookup tool using only free public records: FEC donor filings, voter registrations from OH, FL, NC, MI, and WI, and Secretary of State business filings. No scraping gray-market databases. The use cases that actually drove this: genealogy researchers tracing living relatives, OSINT analysts verifying identities, and due diligence on small business owners where a simple SOS filing check matters more than a phone number. Honest about the gaps — you won't find private mobile numbers or current
peoplesearchindiehackers5/12/2026, 2:07:20 PM
Built a people-search tool from 100% free public data — here's what it can and can't do
Spent a few months aggregating FEC donation records, Ohio/Florida/NC/Michigan/Wisconsin voter rolls, and Secretary of State business filings into a single search layer at apples.live/peoplesearch. No paid scrapers, no data brokers. What it actually finds: past addresses, known associates (co-donors, co-registrants), business ownership history, political donation patterns. Solid for genealogy, OSINT starting points, and light due diligence on business partners. What it won't have: private mobile numbers, email addresses, real-time data. Public records lag 6-18 months. Rural coverage is spotti
peoplesearchtwitter5/12/2026, 2:07:01 PM
Built a people-search tool using only free public data — FEC donations, voter rolls (OH/FL/NC/MI/WI), SoS filings. No paid scrapers. Great for genealogy, OSINT, due diligence. Won't have private cell numbers, but you'd be surprised what's already public. Try it: https://apples.live/peoplesearch
peoplesearchreddit/PrivacyToolsIO5/12/2026, 2:06:47 PM
Built a people-search tool using only free public records — no paid scrapers
Been working on a people-search tool that pulls exclusively from free public sources: FEC donation records, voter rolls from OH, FL, NC, MI, and WI, and Secretary of State business filings. No data brokers, no scraped mobile numbers, no credit headers. Mainly useful for genealogy (tracking family members across states), basic OSINT research, and due diligence on business partners. If someone ran a business or donated to a federal campaign, it'll usually surface something. Honest about the gaps: you won't find unlisted phones, current addresses for people who haven't registered to vote, or an
orangeshackernews5/12/2026, 2:06:27 PM
Ask HN: Anyone else using consultant marketplaces for one-off problems?
Been running a small e-commerce shop for 4 years. Last month hit a wall with our inventory forecasting — knew what I needed but couldn't justify a full-time hire for a 2-week problem. Tried Oranges (oranges.live) on a whim. Described the problem, got matched with a supply chain consultant same day. We did 6 hours total over a week. She handed me a actual model I could use, not a deck full of recommendations I'd never implement. Total cost was less than one month of a contractor retainer. No long-term commitment, no agency markup. Curious if others are solving specific operational problems t
orangeslinkedin5/12/2026, 2:06:09 PM
Running a small bakery means wearing every hat. Last month I hit a wall with our wholesale pricing model — I knew something was wrong but couldn't see it clearly from the inside. A friend mentioned Oranges. I signed up, described the problem, and had three consultants reach out within a day. Picked one with food industry experience, booked two hours. Two hours. She restructured our tier pricing, flagged margin leakage I'd been blind to for a year, and left me a one-page framework I still use weekly. Paid for exactly what I needed. No retainer, no scope creep, no six-week engagement. If you
orangesindiehackers5/12/2026, 2:05:52 PM
Found a consultant in 24 hours and finally fixed my pricing strategy
Been running my e-commerce store for 2 years and kept guessing on pricing. Margins felt wrong but I couldn't figure out why. A friend mentioned Oranges (oranges.live) — it's basically a marketplace where small business owners book consultants by the hour. Skeptical at first. Signed up, described my problem, got matched within a day. Booked a 2-hour session with a retail pricing consultant. She went through my cost structure, identified I was eating 12% margin on shipping, and handed me a revised pricing sheet before the call ended. Paid $160 total. No retainer, no vague deliverables. Just a cl
orangestwitter5/12/2026, 2:05:35 PM
Spent 3 weeks stuck on our pricing model. Posted on oranges.live, matched with a consultant in under 24h, paid for 2 hours, walked away with a clear framework. No retainer. No fluff. Just the answer I needed. This is how hiring should work.
orangesreddit/freelance5/12/2026, 2:05:21 PM
Found a consultant through Oranges and it actually solved my inventory mess
Been running my small ceramic studio for 3 years and my inventory tracking was a disaster — spreadsheets everywhere, no idea what was profitable. Asked around, nobody had time. Someone in a small biz Facebook group mentioned Oranges (oranges.live) so I tried it. Within a day I was matched with an ops consultant. We did two hourly sessions. She audited my process, built me a simple tracking sheet, and walked me through it. I paid for exactly what I needed — no retainer, no bloated scope. Got a clear deliverable and actually understand my margins now. Wish I'd done this 18 months ago.
nikipediahackernews5/12/2026, 2:05:07 PM
Nikipedia: personal wiki that auto-ingests arXiv, HN, and conversations for AI context
Built myself a personal wiki that ingests arXiv papers, HN threads, blogs, and even my own conversations, then cross-links everything automatically. The real unlock: AI agents can query it directly. When I'm deep in a Claude Code session, it pulls context from Nikipedia instead of me manually copy-pasting. Last week I was debugging something and it surfaced a relevant paper I'd ingested three months ago that I'd completely forgotten about. Search is full-text + semantic, so rediscovering old research threads actually works. Everything lives at one endpoint agents can hit. Still pretty rough
nikipedialinkedin5/12/2026, 2:04:49 PM
Built myself a personal wiki that thinks. Nikipedia auto-ingests arXiv papers, HN threads, blog posts, and conversation transcripts — then cross-links them so nothing gets lost. Not a read-later pile. An actual knowledge graph. The real unlock: AI agents can query it directly. When I'm deep in a Claude Code session, instead of hunting through tabs and copy-pasting context, the agent pulls relevant threads from Nikipedia automatically. Research I did three weeks ago shows up exactly when it's relevant. Rediscovery is the underrated use case. You forget 90% of what you read. A searchable, cro
nikipediaindiehackers5/12/2026, 2:04:33 PM
Built a personal wiki that auto-ingests everything so my AI agents actually have context
Six months ago I got tired of copy-pasting research into Claude every session. So I built Nikipedia — a personal wiki that auto-ingests arXiv papers, HN threads, blog posts, and even past conversations, then cross-links them via full-text search. The real unlock: my Claude Code agents query it directly before starting any task. No manual context-dumping. I rediscovered a research thread from three weeks ago yesterday — it surfaced automatically because it matched keywords in what I was building. Ingestion runs on cron. Each source has its own fetcher (arXiv, HN, a16z, YouTube transcripts). S
nikipediatwitter5/12/2026, 2:04:16 PM
Built a personal wiki that auto-ingests arXiv, HN, blogs, and chats — then cross-links everything so AI agents can query it directly. No more copy-pasting context into Claude Code. No more losing a research thread 3 weeks later. It just knows. https://apples.live/embed/nikipedia/
nikipediareddit/ChatGPTCoding5/12/2026, 2:04:02 PM
Built a personal wiki that auto-ingests my research so Claude Code always has context
Been burned too many times by this: spend an afternoon reading arXiv papers and HN threads on some topic, then three weeks later I'm in a Claude Code session and manually copy-pasting stuff I already read. So I built Nikipedia — a personal wiki that auto-ingests arXiv papers, HN threads, blogs, and even conversation transcripts. Everything gets chunked, cross-linked, and stored in a searchable DB. The killer feature for me is feeding it directly to Claude Code as context without any manual copy-paste. Agent reads relevant chunks automatically before responding. The other win: rediscovering o
collagehackernews5/12/2026, 2:03:45 PM
Show HN: Collage – drop-in visual board for links, images, and screenshots with cross-content search
I've been frustrated with how Notion and Obsidian handle visual research — both are text-first, and you're always fighting the structure to embed images meaningfully. Are.na is closer to what I wanted but manual curation and weak search make it slow. Built Collage as a drop-in visual board: paste a link, drop a screenshot, drag an image — it auto-organizes by source and content type. The cross-content search is the part I actually use daily. Searching "competitor pricing" surfaces a six-month-old screenshot, a saved article, and a clipped image all together. Primary use cases so far: competi
collagelinkedin5/12/2026, 2:03:08 PM
I've been thinking about why my Notion setup keeps falling apart for visual research. The problem isn't discipline — it's friction. Pasting a screenshot into Notion means naming it, filing it, tagging it. By the time I'm done organizing, I've lost the thought. Collage (apples.live/collage) takes a different approach: drop anything in — links, images, screenshots — and it auto-organizes. No folders. No naming rituals. Cross-content search that actually spans everything you've dropped. I tested it against my usual stack. Notion is powerful but page-first. Obsidian is text-first. Are.na is cur
collageindiehackers5/12/2026, 2:02:52 PM
I built a visual board because Notion kept eating my research
Spent three months watching my competitor scans die in Notion. Blocks everywhere, no spatial context, search that only finds exact phrases. Obsidian's better for text but hopeless with screenshots. Are.na is beautiful but too manual for actual work. So I built Collage — drop a link, image, or screenshot, it lands on a canvas, auto-clusters by topic, and cross-content search finds text inside images too. First real test: a mood board for a client rebrand. Dragged in 40 screenshots, 15 links, a PDF. Took 8 minutes. Finding anything across all of it takes seconds now. Still early — around 60 pe
collagetwitter5/12/2026, 2:02:34 PM
Notion needs structure. Obsidian needs patience. Are.na needs curation time you don't have. Collage is a drop-in visual board — paste links, images, screenshots, it auto-organizes and makes everything searchable. Research, mood boards, competitor scans. No setup. Just drop. https://apples.live/collage
collagereddit/SideProject5/12/2026, 2:02:23 PM
Built a visual board that actually keeps up with how I research — Collage
I kept bouncing between Notion (too structured), Obsidian (too text-heavy), and Are.na (beautiful but no search). None of them felt right when I'm in full chaos mode — pulling screenshots, dropping links, saving images mid-research sprint. So I built Collage. It's a drop-in visual board: paste a link, drag in a screenshot, drop an image — it just lands. No folder decisions, no tagging rituals. Auto-organization handles clustering, and there's cross-content search so I can actually find that competitor pricing page I screenshotted three weeks ago. My real test was a competitive analysis last
askcraighackernews5/12/2026, 2:01:58 PM
Ask Craig – texted a number about my leaky faucet, had contractor bids by morning
My kitchen faucet started dripping last week and I was dreading the usual runaround – calling five plumbers, leaving voicemails, waiting days. Friend told me to try askcraig.org. You just text a number describing the problem, and contractors in your area bid via SMS the next day. Felt almost too simple. Sent a text at 9pm, woke up to three quotes. Went with the middle one, guy came out two days later, fixed in an hour. I'm honestly not sure why this isn't how everything works. No app, no account, no scheduling interface. Just a text message.
askcraiglinkedin5/12/2026, 2:01:41 PM
Tried something new when my water heater started leaking last month. A neighbor mentioned Ask Craig — you text a number describing the issue, and the next morning your phone has bids from local contractors. No filling out forms, no chasing quotes, no sitting on hold. I was skeptical. Ended up with three responses by 9am. Picked one, job was done by Thursday. What struck me wasn't just the convenience — it was how much anxiety usually surrounds finding a reliable contractor. This removed almost all of it. If you own a home and haven't tried askcraig.org, worth bookmarking before you need it
askcraigindiehackers5/12/2026, 2:01:16 PM
Tested a contractor bidding service so you don't have to
My bathroom faucet had been dripping for three weeks. I kept putting off the "find a plumber" rabbit hole — Yelp, Google, calling around, waiting for callbacks. Friend mentioned askcraig.org. You text a number describing the job, next morning you get SMS bids from local contractors. No app, no account, no profile. I sent the text at 11pm. By 9am I had four bids. Picked the middle one ($175), guy showed same afternoon. What surprised me: the friction is SO low that I actually used it. I've downloaded contractor apps, filled out profiles, then ghosted them. This I finished in 45 seconds. Obv
askcraigtwitter5/12/2026, 2:00:56 PM
Texted a number about my leaking roof at 11pm. Woke up to 4 contractor bids in my messages. Didn't fill out a single form. askcraig.org is either magic or I've been doing this wrong my whole life.
askcraigreddit/test5/12/2026, 2:00:32 PM
Tried that SMS contractor bidding thing and it actually worked?
So my water heater started making this awful rumbling noise last week and I dreaded the usual thing where you call five places and nobody picks up. My neighbor mentioned she used some service called Ask Craig — you just text a number describing what you need and contractors send you bids the next day via text. Figured whatever, tried it. Described my issue, sent it off. Next morning I had three bids in my messages. Picked the middle one, guy came out two days later, fixed it for $340. No scheduling calls, no sitting on hold. Honestly felt too easy. Not sure if I just got lucky but I'd use it a
ytapihackernews5/11/2026, 2:16:47 PM
Ask HN: How are you getting YouTube transcripts at scale?
Been building LLM pipelines that ingest YouTube content -- research summaries, podcast indexing, knowledge base stuff. The official API doesn't expose transcripts. yt-dlp works until it doesn't -- proxies die, captchas appear, rate limits hit. Started using youtubetranscript.us a few weeks ago. Clean REST API, handles the proxy rotation and retries on their end. Pricing is $9/500 requests up to $79/25k. For the time I was burning on infrastructure it's a no-brainer. Main use cases for me: feeding video content into RAG pipelines, building searchable indexes of technical talks, pulling transc
ytapilinkedin5/11/2026, 2:16:30 PM
Been scraping YouTube transcripts for LLM pipelines for months. Rotating proxies, handling captchas, retrying failed requests — it's a solved problem that kept eating engineering time. Found youtubetranscript.us last week. Clean REST API, returns transcript JSON in one call. No infrastructure to babysit. Using it for three things now: feeding video content into RAG pipelines, indexing podcast episodes for search, and pulling research notes from long-form interviews without sitting through the video. Pricing is sane — $9 for 500 requests, $29 for 5k, $79 for 25k. Fits comfortably inside a si
ytapiindiehackers5/11/2026, 2:16:13 PM
Found a solid API for YouTube transcripts at scale
Been building LLM pipelines that ingest YouTube content — tutorials, podcasts, research talks. Scraping transcripts yourself sounds trivial until you hit proxy blocks, captchas, and silent failures at 500+ requests. Burned two weekends on this. Started using youtubetranscript.us and it just works. $9 for 500 requests, $29 for 5k, $79 for 25k. Simple REST API, handles retries and captchas on their end. I use it for podcast indexing (feeding episodes into a vector store), building research note summaries from conference talks, and seeding RAG pipelines with tutorial content. Not affiliated, ju
ytapitwitter5/11/2026, 2:15:54 PM
Been scraping YT transcripts for LLM pipelines and kept hitting rate limits, broken proxies, random captchas. Found youtubetranscript.us — clean API, handles all that junk for you. $9 for 500 reqs. Just works. https://youtubetranscript.us
ytapireddit/datascience5/11/2026, 2:15:40 PM
Built a few LLM pipelines that needed YouTube transcripts — here's what actually worked at scale
Been scraping YouTube transcripts manually for a while — feeding videos into RAG pipelines, indexing podcast episodes, pulling research content. The YouTube Data API gives you captions sometimes, but it's flaky, rate-limited, and breaks constantly on auto-generated transcripts. Started using youtubetranscript.us a few months ago. Clean REST API, handles proxies and retries on their end, doesn't choke on captchas. Pricing is straightforward — $9 for 500 requests, $29 for 5k, $79 for 25k. For the volume I'm doing (few hundred videos/week), it's basically noise in the budget. Main use cases: ch
spidergpthackernews5/11/2026, 2:15:23 PM
SpiderGPT: point it at any site, ask questions, get cited answers
Been using SpiderGPT (spidergpt.com) for a few weeks. You give it a URL, it crawls the site, then you query it like a document. Genuinely useful in practice. Three things I've actually done: - Dropped in three competitor pricing pages, asked "what's included in each mid-tier plan" — got a side-by-side breakdown with source links - Pointed it at our own docs site, asked where we have gaps vs. a competitor's — surfaced missing topics I hadn't noticed - Fed it a changelog, asked for a summary of breaking changes over the last six months The cited answers are the key part. Every claim links back
spidergptlinkedin5/11/2026, 2:14:58 PM
Been using SpiderGPT to ask questions against live websites instead of scraping manually. Pointed it at a competitor's pricing page, asked "what's included in their Pro tier" — got a cited answer in seconds. Then ran it against our own docs site to audit coverage gaps. Found three features completely undocumented. Most useful workflow so far: fed it a product changelog and asked "what breaking changes shipped in the last 6 months" — pulled a clean summary with source links. Not magic, but it removes the manual read-through step when you need to extract specific information from a site you d
spidergptindiehackers5/11/2026, 2:14:42 PM
Been using SpiderGPT to audit competitor sites instead of reading them manually
Spent a few hours last week pointing SpiderGPT at three competitor pricing pages, our own docs site, and a changelog I hadn't read in months. You drop a URL, ask a question, it crawls and gives you cited answers with source links. Practical results: found two competitors had quietly added seat-based pricing (I missed it). Caught three broken doc pages I didn't know existed. Pulled a changelog summary in under a minute instead of skimming 40 entries. Not magic — it's basically RAG on a website. But the citation links mean I can verify every answer, which matters when I'm making product decisi
spidergpttwitter5/11/2026, 2:14:25 PM
pointed spidergpt.com at a competitor's pricing page and asked "what do they charge for enterprise?" — got a cited answer in seconds. then ran it on our own docs to find gaps. this is how I audit sites now.
spidergptreddit/webdev5/11/2026, 2:14:10 PM
Been using an AI tool to scrape and query any website — pretty useful for competitive research
Started using SpiderGPT a few weeks ago for some stuff I kept doing manually. You point it at a URL — docs site, competitor blog, whatever — and it indexes it so you can ask questions and get cited answers back. Practical things I've actually done with it: scanned a competitor's pricing page to track how their tier structure compares to ours, audited our own docs site by asking what's missing or contradictory, and pulled changelog summaries from a tool we depend on without reading 40 entries. The citations are the part that actually matters — you can verify what it's pulling from instead of
appleshackernews5/11/2026, 2:13:54 PM
We were losing ~30% of inbound leads to voicemail. Here's what fixed it.
Run a small HVAC company, 4 techs, just me handling sales. Noticed our close rate was terrible not because of pricing but because we were slow. Customers call three places and go with whoever calls back first. A friend pointed me to apples.live. They set up a simple AI that catches missed calls, texts the customer within 60 seconds, qualifies them, and books a slot on my calendar. No app to install, no complicated CRM integration. First month: booked 11 jobs that would've gone to voicemail. Not life-changing numbers but for a small shop that's real money. Not shilling — genuinely curious if
appleslinkedin5/11/2026, 2:13:35 PM
Running a small HVAC company, I was losing 3-4 leads a week to voicemail. Customers call once, don't hear back fast enough, and they've already booked a competitor. I started using a simple AI system from Apples (apples.live) that picks up missed calls, qualifies the lead, and books a callback slot — all without my team touching anything. First two weeks, we recovered 11 leads we would've lost. One turned into a $4,200 commercial job. It's not magic. It's just speed. The first business to respond wins. I finally had a system that responded instantly, even at 9pm on a Friday. If you're stil
applesindiehackers5/11/2026, 2:13:18 PM
How I stopped losing leads to voicemail (and cut my follow-up time by 80%)
Running a 4-person HVAC company means I'm on rooftops half the day. Every missed call was a lost job — I was following up 6-8 hours late and losing bids to whoever answered first. A friend pointed me to Apples (apples.live). In one afternoon they set up an AI that answers calls, qualifies the lead, books estimates directly into my calendar, and fires me a Slack message with a summary. First month: zero missed leads. Follow-up dropped from avg 6 hours to under 3 minutes. Booked 11 jobs I'd have lost cold. No code, no monthly dev retainer. Just a system that works while I'm on the roof. Wish
applestwitter5/11/2026, 2:13:00 PM
Used to lose 3-4 leads a week just from missed calls. Now an AI built by @apples_live texts them back in 60 seconds, books the appointment, done. Zero extra hires. First month back it paid for itself twice. Wild how simple the fix was.
applesreddit/Dentistry5/11/2026, 2:12:46 PM
Finally stopped losing patients to voicemail
Running a solo practice means I'm elbow-deep in someone's mouth when a new patient calls. They'd hit voicemail, not leave a message, and book somewhere else. Lost probably 3-4 new patients a week that way. A few months ago I set up an AI system through apples.live that answers after 2 rings, qualifies the caller, and books them directly into my schedule. No receptionist needed after hours. First week it captured 6 appointments I would've missed. The thing that surprised me most was follow-up — people who submitted the contact form on my site used to wait 2 days for a callback. Now they get a
quiddlerhackernews5/11/2026, 2:12:30 PM
My mom and I have been playing Quiddler online every Sunday and it's become the highlight of my week
She's in Ohio, I'm in Seattle. We used to play the physical card game at her kitchen table every holiday. Wasn't even sure an online version existed until she texted me a link out of nowhere last month. The game holds up. You're building words from a hand of letter cards, and there's this satisfying tension between going out early with a short word or holding on trying to build something bigger. We play a round, she destroys me, we talk for an hour. If anyone else has family scattered across the country and wants something more engaging than video call small talk -- quiddler.org. No signup,
quiddlerlinkedin5/11/2026, 2:12:13 PM
My grandmother beat me at a word game last weekend — and I couldn't be more proud of her. She lives three states away, and we've been looking for ways to stay connected beyond the usual phone calls. A cousin mentioned Quiddler, a card game we used to play at family reunions when I was a kid. Turns out there's an online version at quiddler.org, so we gave it a shot over video call. What struck me wasn't just the nostalgia. It's genuinely a well-designed game — you build words from a hand of letter cards, and there's real strategy in knowing when to play short and fast versus holding out for a
quiddlerindiehackers5/11/2026, 2:11:55 PM
Rediscovered my dad's old card game — now I play it online with him every Sunday
My dad used to pull out Quiddler at every family gathering. I hadn't thought about it in years until my daughter asked about "games grandpa plays." Found quiddler.org and we started a weekly Sunday game over video call. Three weeks in, my dad — who still uses a flip phone — figured out how to share his screen just to trash-talk my word scores. What surprised me: the game teaches vocabulary without feeling like a lesson. My daughter picked up "qi" and "xu" and now uses them like she invented them. Sometimes the best product discovery isn't a launch — it's someone's kid asking about something
quiddlertwitter5/11/2026, 2:11:38 PM
my grandma and i used to play quiddler every christmas. she passed two years ago and i just found out you can play it online now. cried a little ngl. quiddler.org
quiddlerreddit/wordgames5/11/2026, 2:11:25 PM
Anyone else remember Quiddler from family game nights?
My grandma used to crush us at this game every Thanksgiving. She had the physical card deck and we'd sit around her kitchen table for hours arguing over whether "qi" counts (it does, she always won). She passed last year and I found myself weirdly nostalgic for it, so I looked it up and found quiddler.org — it's basically the same game online. Played a few rounds by myself last night just to feel something. It hit different. If you've never played, it's like if Scrabble and rummy had a baby. Highly recommend.
playbookhackernews5/11/2026, 2:11:10 PM
What I learned running 20+ Claude Code subagents on a single VPS
Been running a setup where a web terminal (xterm.js over websocket) feeds into tmux sessions, which a task queue dispatches work to. Each slot runs a Claude Code subagent. The orchestration layer sounds fancy but it's mostly bash and sqlite. What actually works: delegating bounded, file-local tasks. Subagents that touch one directory and report back are reliable. What's hard: shared state. Two agents editing the same config file will race. I've started treating each agent like a microservice -- owns a path, writes outputs, never reads another agent's in-progress work. The research loop (arX
playbooklinkedin5/11/2026, 2:10:53 PM
What I learned running 20+ Claude Code agents on one VPS
The orchestration layer is harder than the agents. I run a system where a web terminal feeds tasks into a tmux-backed queue. Each task spins up a Claude Code subagent in its own pane. Research loops run on cron. The agents themselves mostly work. The part that breaks is context: agents that don't know what the other 19 are doing, duplicate work, race conditions on shared files. The real fix wasn't better prompts. It was better isolation -- each agent gets a clear artifact to write, a path to write it to, and a completion signal. Shared state lives in SQLite, not in conversation context. Idl
playbookindiehackers5/11/2026, 2:10:33 PM
Running 20+ Claude subagents on one $6 VPS: what actually breaks
Been running a multi-agent setup for a few months — web terminal (xterm.js), tmux for session orchestration, a SQLite task queue, and autonomous research loops that spin up Claude Code subagents on demand. The surprising bottleneck isn't compute or memory. It's context pollution. Agents share the same Claude Max subscription, so concurrent heavy tasks hit rate limits in ways that are hard to predict. I now throttle to 4-6 active agents max and queue the rest. The other hard part: monitoring. An agent that silently hangs looks identical to one that's thinking. I pipe every agent's pane output
playbooktwitter5/11/2026, 2:10:16 PM
Running 20+ Claude subagents on one VPS taught me: the bottleneck isn't compute, it's context. Agents that share a tmux session + task queue stay coherent. Agents that don't? They re-discover the same dead ends. Orchestration > parallelism.
playbookreddit/ClaudeAI5/11/2026, 2:09:59 PM
Running 20+ Claude Code subagents on one VPS — what actually breaks
Been running a setup where a web terminal feeds tasks into a tmux-based orchestrator, which spawns Claude Code subagents for research loops, file transforms, and API calls. Works surprisingly well until it doesn't. The real lesson: the hard part isn't spawning agents, it's knowing when they're done. Polling `capture-pane` for completion signals sounds janky but it's the only reliable method I've found. Agents hang silently more than you'd expect — no error, no output, just stopped. Context bleed is the other issue. Subagents inherit assumptions from parent prompts in subtle ways. An agent to
peoplesearchhackernews5/11/2026, 2:09:39 PM
Show HN: People search built entirely from free public records (FEC, voter rolls, SoS filings)
Built a people-search tool that pulls exclusively from free public sources: FEC donor records, voter registration data from OH/FL/NC/MI/WI, and Secretary of State business filings. No paid data brokers, no scraped social profiles. Use cases I had in mind: genealogy researchers tracing living relatives, OSINT analysts verifying identities, and due diligence on small business owners or political donors. Honest about limits: you won't get private mobile numbers, current employer, or anyone outside those five states (voter data). Coverage is patchy. But for public-record verification — did this
peoplesearchlinkedin5/11/2026, 2:09:21 PM
I built a people-search tool using only free public records — FEC donor filings, voter rolls from OH, FL, NC, MI, and WI, and Secretary of State business filings. No paid scrapers, no data brokers. The use cases are real: genealogy researchers tracing family branches, OSINT analysts building subject profiles, and due diligence work where you want corroborating signals before a business relationship. Honest about what it can't do: no private mobile numbers, no credit data, no proprietary aggregator feeds. What it does surface is surprisingly rich — addresses, party registration history, busin
peoplesearchindiehackers5/11/2026, 2:09:05 PM
Built a people-search tool from 100% free public records — here's what it can (and can't) do
Spent a few months stitching together FEC donation records, voter rolls from OH/FL/NC/MI/WI, and Secretary of State business filings into a single search layer at https://apples.live/peoplesearch. No paid data brokers. No scrapers. What surprised me: you can reconstruct a pretty solid profile — address history, political donations, business affiliations, relatives — just from civic records that governments already publish. Covered roughly 60M+ voter records across those five states alone. Honest limits: no private cell numbers, no credit data, thin coverage outside those states, and anyone
peoplesearchtwitter5/11/2026, 2:08:43 PM
Built a people-search from FEC donations, voter rolls (OH/FL/NC/WI/MI), and SoS filings — zero paid scrapers. Great for genealogy, OSINT, due diligence. Won't have private cell numbers, but public civic footprints are surprisingly rich. apples.live/peoplesearch
peoplesearchreddit/investigators5/11/2026, 2:08:30 PM
Built a people-search tool from free public records — FEC, voter rolls, SOS filings
Been doing genealogy and light OSINT work for years and got tired of paying for scrapers that mostly resell the same stale data. So I built my own thing at apples.live/peoplesearch that pulls exclusively from free public sources — FEC campaign donations, voter registration files from OH, FL, NC, MI, and WI, and Secretary of State business filings. Obvious caveat: this won't give you private cell numbers or credit headers. What it *will* give you is address history, political donation patterns, business affiliations, and registered agent connections — stuff that's genuinely useful for due dili
orangeshackernews5/11/2026, 2:08:13 PM
We paid a consultant $150 for 2 hours and it saved us weeks of guessing
Been running a small e-commerce shop for 3 years. Our checkout abandonment rate jumped from 12% to 31% after we switched payment processors and I had no idea why. Spent two weeks reading articles, tweaking copy, nothing moved. Friend mentioned oranges.live — a marketplace that matches small businesses with consultants for hourly work. Signed up, described the problem, got matched with someone who'd done CRO for Shopify stores for 6 years. Within 48 hours we had a 2-hour call. He spotted it immediately: our new processor's 3DS redirect was breaking mobile Safari's back-navigation. Gave me a 4
orangeslinkedin5/11/2026, 2:07:53 PM
Six months into running my bakery's online store, I hit a wall with our inventory system. Orders were slipping through the cracks and I had no idea why. A friend mentioned Oranges. I posted my problem, got matched with an ops consultant by the next morning, and booked two hours with her that afternoon. She diagnosed the issue in the first hour. Duplicate SKUs from a botched import — something I'd never have found on my own. By hour two, we had a fix and a process to prevent it. Paid for exactly what I needed. No retainer, no month-long engagement. If you're a small business owner who needs
orangesindiehackers5/11/2026, 2:07:35 PM
Hired a consultant in 24 hours and finally unblocked my pricing page
Been bootstrapping my SaaS for 8 months. Hit a wall on conversion — my pricing page had a 2.1% trial signup rate and I couldn't figure out why. Asked around, got generic advice. Finally tried Oranges (oranges.live) on a whim. Matched with a conversion consultant same day, booked a 2-hour session for $180. She audited the page live, pointed out I was burying the free tier and using feature-first copy instead of outcome-first. One week after her changes: trial signups up to 4.8%. The hourly model was the right call — I didn't need a $5k retainer, I needed two focused hours with someone who'd see
orangestwitter5/11/2026, 2:07:17 PM
Been stuck on my pricing strategy for weeks. Posted on Oranges, matched with a consultant in under 24 hours. One 90-min call, clear deliverable, done. This is what getting unstuck feels like. oranges.live
orangesreddit/startups5/11/2026, 2:07:01 PM
Found a consultant for my pricing problem in less than a day — worth every penny
Been running my small catering business for 3 years and always priced by gut feel. Finally hit a point where I knew I was leaving money on the table but had no idea how to fix it. Friend mentioned Oranges so I figured why not. Matched with a pricing consultant the next day, booked a 2-hour slot, paid hourly. She audited my menu, compared my margins to industry benchmarks, and handed me a revised pricing sheet I could actually use. No retainer, no agency fluff. Just a clear deliverable. Made back the cost within two weeks. If you're stuck on something specific and don't need a full-time hire, h
nikipediahackernews5/11/2026, 2:06:45 PM
Nikipedia – personal wiki that auto-ingests arXiv, HN, and conversations so AI agents can search it
Been building a personal wiki called Nikipedia that pulls in arXiv papers, HN threads, blog posts, and my own notes, then cross-links everything automatically. The real payoff isn't reading it myself – it's feeding it to Claude Code as context. Instead of manually copy-pasting papers or digging through browser history, I point the agent at the wiki and it finds the relevant thread. Rediscovering a research direction I half-explored six weeks ago now takes seconds instead of never. Ingest is a single command: drop a URL, it classifies, extracts, and links to related entries. Search is sqlite
nikipedialinkedin5/11/2026, 2:06:26 PM
I built myself a second brain that actually works for AI agents. Nikipedia auto-ingests arXiv papers, HN threads, blogs, and my own conversations — then cross-links them semantically. Not a bookmark dump. A live knowledge graph. The concrete problem it solves: I was context-switching between research threads and Claude Code sessions, manually copy-pasting relevant papers every time. Now I query Nikipedia first. The agent gets structured context. I stop re-explaining things I already knew three weeks ago. The other thing it fixed: rediscovery lag. A research thread I touched in February surf
nikipediaindiehackers5/11/2026, 2:06:08 PM
Built a personal wiki that feeds context to my AI agents automatically
Six months ago I kept losing research threads. I'd spend an hour reading arXiv papers on RAG architectures, close the tabs, then two weeks later Claude Code would ask me a question where that exact context was relevant — and I had nothing to paste. So I built Nikipedia. It auto-ingests arXiv papers, HN threads, blog posts, and my own conversations, then cross-links them into a searchable graph. Now when I'm in a Claude Code session, it pulls from that graph instead of my memory. The unlock wasn't the ingestion — it was the cross-linking. A paper on vector quantization surfaced a note I'd wri
nikipediatwitter5/11/2026, 2:05:50 PM
Tired of copy-pasting context into Claude Code. Built Nikipedia — auto-ingests arXiv, HN threads, blogs + my own conversations, cross-links everything so agents can query it directly. Research I forgot about surfaces itself weeks later. Second brain that actually works.
nikipediareddit/ObsidianMD5/11/2026, 2:05:24 PM
Built a personal wiki that auto-ingests arXiv, HN, and conversations so I stop losing research threads
I kept losing track of things I'd read. A paper on RAG architectures, an HN thread about vector DBs, a Claude conversation where I figured something out. Two weeks later -- gone. So I built Nikipedia. It auto-ingests arXiv papers, HN threads, blog posts, and my own conversations, then cross-links them. The killer use case for me: feeding context directly to Claude Code without manually copy-pasting. I just point it at a topic and it pulls the relevant chunks. It's not Obsidian -- it's more of a queryable knowledge layer that AI agents can actually consume. Less for humans to browse, more for
collagehackernews5/11/2026, 2:04:56 PM
Show HN: Collage – visual board for links, images, and screenshots with cross-content search
I kept hitting the same wall with Notion and Obsidian: great for text, awkward for visual research. Are.na is closer, but it's slow and the search is shallow. Built Collage for how I actually work — drop in links, screenshots, images, and it auto-organizes by source type. The part I use most is search that cuts across everything: paste a competitor's URL, and I can find every screenshot, note, or saved page touching that domain. Concrete flows that work well: competitor scans (drag in 20 product screenshots, search by feature), mood boards (mix Dribbble links with your own snaps), research r
collagelinkedin5/11/2026, 2:04:31 PM
Most knowledge management tools optimize for structure. You define the hierarchy first, then fill it in. That works until you're mid-research and don't yet know what the structure should be. Collage flips that. Drop in links, screenshots, images — it auto-organizes and makes everything searchable across content types. No folders to design upfront. Notion is powerful but demands schema. Obsidian rewards those who love graph theory. Are.na is beautiful but manual. Collage sits in a different lane: low-friction capture that doesn't punish you for thinking messily. Practical use cases where it
collageindiehackers5/11/2026, 2:04:07 PM
I built a visual board for research after Notion/Obsidian kept failing me at capture time
I've tried Notion databases, Obsidian canvas, and Are.na for research. All three break down the same way — friction at capture time means half my links never land there. Collage is my fix: drop a link, paste a screenshot, drag an image, and it auto-organizes into a visual board. Cross-content search means I can find "that SaaS pricing page from six weeks ago" without remembering which folder it's in. Workflows where it actually sticks: competitor scans (screenshot + URL, one board), mood boards, research sprints pulling from 8 sources at once. Honest comparison: Notion is for documentation.
collagetwitter5/11/2026, 2:03:43 PM
Notion = writing. Obsidian = linking. Are.na = curation. Collage = thinking visually. Drop links, images, screenshots -- auto-organizes, cross-searches everything. Used it for a competitor scan last week and found connections I'd never have linked manually. apples.live/collage
collagereddit/test5/11/2026, 2:02:16 PM
Built a visual drop zone for research chaos — honest comparison to Notion/Obsidian/Are.na
Been deep in PKM tools for years. Notion's great for structured notes but terrible for visual chaos. Obsidian needs too much upfront wiring. Are.na is beautiful but the search is frustrating when you're mid-research. So I built Collage — basically a drop-in visual board where I throw links, screenshots, and images without thinking about folders or tags. It auto-organizes and the cross-content search actually finds things across everything at once. My workflow: competitor scan starts as a dumping ground, mood board stays visual without becoming a spreadsheet, research projects don't collapse
askcraighackernews5/11/2026, 2:01:59 PM
Ask Craig: texted a number for a plumber, got bids by morning
Had a slow drain turn into a full backup last week. Didn't want to spend 45 minutes calling shops and leaving voicemails. Friend mentioned Ask Craig — you just text a number describing the job, and contractors in your area bid back via SMS the next day. Figured I'd try it. Got three responses by 9am. Prices varied more than I expected ($180-$340 for the same job description). Went with the middle one, guy showed up same afternoon, fixed it in an hour. No app, no account, no Angi-style spam calls afterward. Just SMS in, SMS out. askcraig.org if curious. Seems early but the mechanic is genuine
askcraiglinkedin5/11/2026, 2:01:40 PM
Needed a plumber last week. Didn't want to spend two hours calling around and leaving voicemails. Tried something new — texted a number called Ask Craig with a quick description of the issue (slow drain, possible clog past the trap). Next morning I had three bids in my SMS thread. What surprised me: the bids were specific. One contractor asked a follow-up question before quoting. That alone told me more about who to hire than any Yelp review. Total time I spent on the process: maybe four minutes. The contractor I chose showed up same day. Not sure this works for every trade or every market
askcraigindiehackers5/11/2026, 2:01:20 PM
Tried a text-based contractor bidding thing — actually worked
Needed my deck resealed before summer. Dreaded the usual routine — three tabs open, two no-shows, one guy who quoted $800 and ghosted me. A friend mentioned Ask Craig. You text a number describing the job, they send it out to local contractors, bids come back via SMS the next morning. Felt gimmicky. Did it anyway. Got 4 responses by 9am. Hired the second-cheapest ($340 vs $480 high). Guy showed up on time, finished in a day. What surprised me: the SMS constraint seemed to filter out flaky contractors. No one submitting a bid through a phone is messing around. Could be selection bias, but 1/
askcraigtwitter5/11/2026, 2:00:55 PM
Tried Ask Craig for my leaking water heater — texted a number, woke up to 3 contractor bids in my SMS. No app, no account, no Yelp rabbit hole. Picked one, done by noon. askcraig.org
askcraigreddit/HomeMaintenance5/11/2026, 2:00:32 PM
Tried that SMS contractor thing for my leaky faucet — actually worked?
So my kitchen faucet has been dripping for like three weeks and I kept putting it off. Friend mentioned some service called Ask Craig where you just text a number describing the problem and contractors reach back the next day with quotes. Figured worst case nothing happens. Texted them around 9pm, woke up to four replies. Prices ranged a decent amount, asked one of them a follow-up question, he answered fast. Ended up booking him, faucet's fixed, whole thing cost less than I expected. No app download, no account, just texts. Honestly surprised it wasn't a disaster.
ytapihackernews5/10/2026, 2:17:07 PM
Ask HN: How are you getting YouTube transcripts at scale?
Been building an LLM pipeline that ingests YouTube videos as context — research notes, podcast indexing, that kind of thing. yt-dlp works locally but falls apart at any real volume: IP blocks, captchas, flaky retries. Started using youtubetranscript.us (https://youtubetranscript.us) a few weeks ago. Clean REST API, handles all the proxy/retry/captcha nonsense on their end. Pricing is reasonable — $9 for 500 requests, $29 for 5k. For my use case (feeding transcripts into Claude for summarization + tagging) the cost is negligible compared to the engineering time saved. Curious what others are
ytapilinkedin5/10/2026, 2:16:49 PM
Spent way too long wrestling with YouTube's transcript API before finding something that actually works at scale. Building LLM pipelines that ingest video content means you need transcripts — fast, reliably, without babysitting proxies or handling captcha failures at 2am. Same problem shows up in podcast indexing and research note workflows where you're pulling hundreds of videos a week. Started using youtubetranscript.us (https://youtubetranscript.us) a few weeks ago. Clean REST API, handles proxy rotation and retries on their end, pricing that makes sense for real usage: $9 for 500 request
ytapiindiehackers5/10/2026, 2:16:32 PM
Found a clean API for YouTube transcripts at scale — saved me days of scraping headaches
Built an LLM pipeline that ingests YouTube videos for research notes and podcast indexing. Started with yt-dlp + manual parsing. Worked fine until rate limits and captchas started killing 30-40% of requests. Tried a few workarounds — rotating proxies, headless browser fallbacks — all brittle. Each new anti-bot update would break something. Eventually found youtubetranscript.us. Clean REST API, handles proxies and retries server-side, returns timestamped JSON. Pricing made sense for my volume: $9 for 500 requests to test, $29 for 5k once it proved out. For feeding video content into LLM pipel