pdyc 3 hours ago [-]
Do you use the same accounts? How do you make sure that ChatGPT/Gemini etc. don't personalize the queries when used with the same account? Also, responses change based on location and IP (residential IPs are treated differently).
clawbridge 12 hours ago [-]
Great point, this is the part of the conversation that doesn’t get enough attention.
It feels a lot like early cloud and DevOps all over again. The demo works, but the real question is whether you can trust it in production.
arunakt 8 hours ago [-]
Awesome. How is this different from GEO?
vincko 7 hours ago [-]
It's not different from GEO. The actions we take all play into GEO.
onecommit 18 hours ago [-]
How do models currently assess the quality and accuracy/veracity of content when recommending products? What do the providers do to avoid a situation where more content === more traffic? Would love to see links to relevant research on this, if you have them. Much success to you; appreciate your AI slop risk awareness.
vincko 18 hours ago [-]
First there is preselection, which depends on the fanout queries the model comes up with and the content's performance across those queries on the search index.
After that, the content is actually assessed by the model. This paper tried different strategies to improve performance at this last step: https://arxiv.org/pdf/2311.09735. Adding statistics, sources, and original data are all strategies that we apply.
In classic SEO, creating more and more content leads to "cannibalization". Generally this hurts performance of all overlapping content so much that it is not worth it.
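A rough sketch of how the cannibalization idea can be checked mechanically (this is an illustration, not our pipeline; the keyword sets and the 0.6 threshold are made up):

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity between two keyword sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def cannibalization_risk(pages: dict[str, set[str]], threshold: float = 0.6):
    """Flag page pairs whose keyword targets overlap enough to compete
    against each other in the same fanout queries."""
    names = sorted(pages)
    risky = []
    for i, p in enumerate(names):
        for q in names[i + 1:]:
            if jaccard(pages[p], pages[q]) >= threshold:
                risky.append((p, q))
    return risky

pages = {
    "/blog/ai-seo-guide":  {"ai", "seo", "guide", "ranking"},
    "/blog/geo-explained": {"ai", "seo", "geo", "ranking"},
    "/blog/pricing":       {"pricing", "plans"},
}
print(cannibalization_risk(pages))
# [('/blog/ai-seo-guide', '/blog/geo-explained')]
```

The two blog posts share three of five keywords (Jaccard 0.6), so they get flagged; the pricing page doesn't overlap with anything.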
onecommit 11 hours ago [-]
interesting - thanks!
Gobhanu 19 hours ago [-]
how do you track where users are coming from?
vincko 19 hours ago [-]
We currently simply integrate with your Google Analytics and filter by Source. This tends to be a lower bound, since it's not always set correctly. Coming from some of the native apps, users might be categorized as direct visitors.
There are other data sources we want to enable in the future like Cloudflare.
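For a sense of what the Source filter amounts to, here's a minimal sketch of bucketing sessions by referrer hostname (the list of AI hostnames is an assumption on my part, not an official mapping, and it changes over time):

```python
from urllib.parse import urlparse

# Referrer hostnames commonly attributed to AI assistants
# (illustrative, non-exhaustive).
AI_SOURCES = {
    "chatgpt.com", "chat.openai.com", "perplexity.ai",
    "gemini.google.com", "copilot.microsoft.com",
}

def classify(referrer: str) -> str:
    """Bucket a session by its referrer, mirroring a Source filter in GA."""
    host = urlparse(referrer).hostname or ""
    host = host.removeprefix("www.")
    if not host:
        return "direct"  # native apps often send no referrer at all
    if host in AI_SOURCES:
        return "ai"
    return "other"

print(classify("https://chatgpt.com/"))           # ai
print(classify(""))                                # direct
print(classify("https://news.ycombinator.com/"))  # other
```

The empty-referrer case is exactly why this is a lower bound: visits from native apps collapse into "direct" and get undercounted.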
yunyu 19 hours ago [-]
What do you guys do differently than Profound or Airops?
vincko 19 hours ago [-]
That's a super valid question; we get it a lot. There is a lot of overlap.
In our view Profound and Airops are aimed at existing marketing teams. Our goal is to be more hands-off, so you don't need a team. With many of our clients we act more like an agency, communicating via Slack and automating step by step. That's the experience we want to create. We aren't there yet though.
debarshri 19 hours ago [-]
Add peec to that list.
vincko 19 hours ago [-]
True, it is very competitive.
Our view on Peec is that it is an analytics solution. They recently launched an actions feature, but it does not actually take any actions (yet). Creating content takes a lot of resources, and agencies are expensive.
As an analytics solution it is a good option.
methyl 17 hours ago [-]
And Surfer, the OG content optimization platform.
ceejayoz 19 hours ago [-]
Ugh. The worst of SEO, but a bunch more of it? Noooooo.
vincko 19 hours ago [-]
I get it, there is a lot of worry about slop.
We think about it like this: all of these agents will be most useful to users if they provide valuable answers. So they will be looking for valuable content to ground their answers in.
There are exploits: you can overfit on whatever they currently use as an objective function. But those tend to be temporary, so in the long run valuable content will win. That's what we aim to create. It's a fine line.
ceejayoz 19 hours ago [-]
> all of these agents will be most useful to users if they provide valuable answers
This is a bald assertion.
vincko 19 hours ago [-]
Do you doubt the statement on how to maximize usefulness? Or do you mean that the companies behind the models might not optimize (exclusively) for usefulness to the user?
I do share doubts about the latter.
ceejayoz 18 hours ago [-]
> Do you doubt the statement on how to maximize usefulness?
Yes; the customer here is the site using it, not Google end users, who'll tend to accept whatever's the top search result even if it's deeply wrong or complete slop.
The wellbeing of search users isn't really the priority here, right?
vincko 18 hours ago [-]
Yes, that is correct. We help the brands, not the end user.
Let me try to rephrase the line of thinking:
To maximize value to the end user, the [AI search] models generally aim to be helpful. The companies building these models [OpenAI, etc.] are incentivized to make the model use helpful content.
Our goal is to be aligned with their objective function long term. And that incentivizes us to create helpful content.
Not all of this is a given. We don't know for sure how it will play out. There will always be ways to game the system. But we think those will get fixed over time.
Edit: added some clarifications on what I mean by "models"
ceejayoz 17 hours ago [-]
Let me rephrase, too.
> To maximize value to the paying customer, the models generally aim to be seen as helpful by Google's algorithm. The companies building these models are incentivized to make the model seem to use helpful content.
SEO does the same thing; the appearance of useful to Google is more important than the actual being useful to Google's visitors.
CloakHQ 15 hours ago [-]
[dead]
yolosollo 8 hours ago [-]
[dead]
vincko 7 hours ago [-]
It will be interesting to see which standard prevails. There are a lot of ideas in this space. WebMCP is one of them.
What I am wondering: irreversible actions also need to be communicated to human users. So wouldn't a site already communicate this in a way the agent understands? What is the advantage of a separate manual? Especially since it can go stale quickly.
Remi_Etien 19 hours ago [-]
[dead]
a13n 19 hours ago [-]
Please don't override the browser's default scroll behavior. It's so jarring and basically never a good idea.
vincko 18 hours ago [-]
Thank you for the feedback. We'll launch our new site soon where this is fixed.
abitabovebytes 18 hours ago [-]
[dead]
vahar 18 hours ago [-]
Regarding the topic of ambient agents, what’s the impact of your product? It’s hard for me to imagine the impact but I guess it must be a necessity if we have ambient agents to get discovered at all right? Nice to see a player from Europe on the market too!
vincko 17 hours ago [-]
Do you mean agents not answering short specific user prompts?
For those types of agents, prompt tracking is less accurate since the context of the queries is so large. But it's still relevant to understand what web searches they tend to perform and if you do show up in those.
That's another reason why we want to integrate other data sources, especially network logs.
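To give a flavor of what network-log data adds: AI crawlers identify themselves in the user-agent string, so raw access logs show which pages they fetch even when no prompt-level data exists. A minimal sketch (the agent list and log lines are illustrative and go stale quickly):

```python
import re

# User-agent substrings published by AI crawlers
# (illustrative, non-exhaustive, changes over time).
AI_AGENTS = ["GPTBot", "OAI-SearchBot", "ChatGPT-User",
             "PerplexityBot", "ClaudeBot"]
AI_RE = re.compile("|".join(map(re.escape, AI_AGENTS)))

def ai_hits(log_lines):
    """Count requests per AI agent from raw access-log lines."""
    counts = {}
    for line in log_lines:
        m = AI_RE.search(line)
        if m:
            counts[m.group(0)] = counts.get(m.group(0), 0) + 1
    return counts

logs = [
    '1.2.3.4 - - [10/Oct/2025] "GET /pricing HTTP/1.1" 200 "-" "Mozilla/5.0 GPTBot/1.0"',
    '5.6.7.8 - - [10/Oct/2025] "GET / HTTP/1.1" 200 "-" "PerplexityBot/1.0"',
    '9.9.9.9 - - [10/Oct/2025] "GET / HTTP/1.1" 200 "-" "Mozilla/5.0"',
]
print(ai_hits(logs))  # {'GPTBot': 1, 'PerplexityBot': 1}
```

That kind of signal complements the analytics side: logs capture crawler grounding activity, while referrer data captures the human visits it eventually drives.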