At this point I'm fully down the path of the agent just maintaining its own tools. I have a browser skill that continues to evolve as I use it. Beats every alternative I have tried so far.
dtkav 2 hours ago [-]
Same. Claude Opus 4.5 one-shots the basics of chrome debug protocol, and then you can go from there.
Plus, now it is personal software... just keep asking it to improve the skill based on your usage. Bake in domain knowledge or business logic or whatever you want.
I'm using this for e2e testing and debugging Obsidian plugins and it is starting to understand Obsidian inside and out.
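The CDP basics alluded to above can be sketched in a few lines. This is just the message framing: `Page.navigate` and `Runtime.evaluate` are real CDP method names, but the connection plumbing (launching Chrome with `--remote-debugging-port` and opening a page's WebSocket debugger URL) is left out.

```python
import itertools
import json

# Each Chrome DevTools Protocol command is a JSON object with a unique
# id, a method name, and optional params, sent over the target page's
# WebSocket debugger URL.
_ids = itertools.count(1)

def cdp_command(method: str, **params) -> str:
    """Serialize one CDP command as a JSON string."""
    return json.dumps({"id": next(_ids), "method": method, "params": params})

# Example commands an agent might send after connecting:
nav = cdp_command("Page.navigate", url="https://example.com")
evl = cdp_command("Runtime.evaluate", expression="document.title")
```

From here, "go from there" mostly means adding more domains (Input, DOM, Accessibility) as the skill needs them.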
chrisweekly 50 minutes ago [-]
Cool! Have you written more about this? (EDIT: from your profile, is that what https://relay.md is about?)
kinduff 3 hours ago [-]
what's the name of the skill?
gregpr07 2 hours ago [-]
Creator of Browser Use here. This is cool, a really innovative approach with ARIA roles. One idea we have been playing around with a lot is just giving the LLM raw HTML and a really good way to traverse it: no heuristics, just BS4. It seems to work well, but it's much more expensive than the current prod-ready [index]<div ... notation.
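For illustration, the `[index]<tag ...>` notation mentioned above can be produced with a tiny traversal. This toy version uses the stdlib parser rather than BS4 so it stays self-contained; the set of "interactive" tags and the output format are assumptions, not Browser Use's actual implementation.

```python
from html.parser import HTMLParser

class InteractiveIndexer(HTMLParser):
    """Collect interactive elements and emit them as [index]<tag attrs>."""
    INTERACTIVE = {"a", "button", "input", "select", "textarea"}

    def __init__(self):
        super().__init__()
        self.elements = []

    def handle_starttag(self, tag, attrs):
        if tag in self.INTERACTIVE:
            attr_str = " ".join(f'{k}="{v}"' for k, v in attrs if v is not None)
            entry = f"[{len(self.elements)}]<{tag}"
            entry += f" {attr_str}>" if attr_str else ">"
            self.elements.append(entry)

def index_html(html: str) -> str:
    parser = InteractiveIndexer()
    parser.feed(html)
    return "\n".join(parser.elements)
```

The trade-off gregpr07 describes is token cost: raw HTML gives the model everything, while a compact index like this throws most of it away but fits cheaply in context.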
binalpatel 6 hours ago [-]
Cool to see lots of people independently come to "CLIs are all you need". I'm still not sure if it's a short-term bandaid because agents are so good at terminal use or if it's part of a longer-term trend, but it's definitely felt much more seamless to me than MCPs.
I am also not sure if MCP will eventually be fixed to allow more control over context, or if the CLI approach really is the future for Agentic AI.
Nevertheless, I prefer the CLI for other reasons: it is built for humans and is much easier to debug.
binalpatel 1 hour ago [-]
100% - once I'd done it enough, sharing CLIs with the agent felt like another channel to interact with it: like a task manager the agent and I can both use through the same interface.
0x696C6961 4 hours ago [-]
MCP lets you hide secrets from the LLM
pylotlight 4 hours ago [-]
you can do the same thing with a CLI via env vars, no?
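The env-var approach can be demonstrated in a few lines: a wrapper process injects the secret into the tool's environment, while the agent only ever composes the argv. `MYCLI_TOKEN` is a hypothetical name; the subprocess here is a stand-in for any real CLI that reads its credentials from the environment.

```python
import os
import subprocess
import sys

# The secret is set by the human/wrapper outside the prompt, so it never
# appears in the agent's context or transcript.
env = {**os.environ, "MYCLI_TOKEN": "s3cret"}

# Stand-in for the real CLI: it can see the token, the LLM cannot.
proc = subprocess.run(
    [sys.executable, "-c",
     "import os; print('token visible to tool:', bool(os.environ.get('MYCLI_TOKEN')))"],
    env=env, capture_output=True, text=True,
)
print(proc.stdout.strip())
```

One caveat either way: whether via MCP or env vars, the secret stays hidden only as long as the tool itself never echoes it back into the model's context.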
desireco42 5 hours ago [-]
Hey this looks cool. So each agent or session is one thread. Nice. I like it.
randito 6 hours ago [-]
If you look at the Elixir keynote for Phoenix.new -- a cool agentic coding tool -- you'll see some hints about browser control using an API tool call. It's called "web" in the video.
The main difference is likely the targeting philosophy. webctl relies heavily on ARIA roles/semantics (e.g. role=button name="Save") rather than injected IDs or CSS selectors. I find this makes the automation much more robust to UI changes.
Also, I went with Python for V1 simply for iteration speed and ecosystem integration. I'd love to rewrite in Rust eventually, but Python was the most efficient way to get a stable tool working for my specific use case.
"browser automation for ai agents" is a popular idea these days.
grigio 6 hours ago [-]
is there a benchmark? there are a lot of scraping agents nowadays..
cosinusalpha 5 hours ago [-]
I don't have an objective benchmark yet. I tried several existing solutions, especially the MCP servers for browser automation, and none of them were able to reproducibly solve my specific task.
An objective benchmark is a great idea, especially to compare webctl against other similar CLI-based tools. I'll definitely look into how to set that up.
desireco42 5 hours ago [-]
How are you holding the session if every command is issued through the CLI? I assume this is essential for automation.
cosinusalpha 4 hours ago [-]
A background daemon holds the session state between different CLI calls. This daemon is started automatically on the first webctl call and auto-closes after a timeout period of inactivity to save resources.
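The lifecycle described above can be sketched as follows. Class and method names and the timeout default are illustrative, not webctl's actual internals: state lives in one long-running process, each CLI call refreshes an activity timestamp, and the daemon exits once it has been idle too long.

```python
import time

class SessionDaemon:
    """Holds browser session state between short-lived CLI invocations."""

    def __init__(self, idle_timeout: float = 300.0):
        self.idle_timeout = idle_timeout
        self.state = {}                       # e.g. open pages, cookies, auth
        self.last_activity = time.monotonic()

    def handle(self, command: str):
        """Each CLI call forwards here (e.g. over a local socket)."""
        self.last_activity = time.monotonic()
        self.state["last_command"] = command

    def should_shutdown(self) -> bool:
        """True once no CLI call has arrived within the idle timeout."""
        return time.monotonic() - self.last_activity > self.idle_timeout

# Tiny timeout so the idle shutdown is observable in this sketch:
daemon = SessionDaemon(idle_timeout=0.05)
daemon.handle("navigate https://example.com")
time.sleep(0.1)                               # no further CLI calls arrive
```

The first real CLI call would spawn this process if none is running, which is what makes each individual `webctl` invocation look stateless from the shell.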
desireco42 3 hours ago [-]
I see, nice. Is there a way to run multiple sessions?
(one of my many contributions: https://github.com/caesarnine/binsmith)
Video: https://youtu.be/ojL_VHc4gLk?t=2132
More discussion: https://simonwillison.net/2025/Jun/23/phoenix-new/
https://github.com/rumca-js/crawler-buddy
More like a framework for other mechanisms
How is it different?