
Eugene Meidinger

@SQLGene

Data training for busy people. Power BI Consultant. Pluralsight author. He/Him. Mast: @Sqlgene@techhub.social Bsky: @sqlgene.com

Eugene Meidinger reposted

I love Substack. Always have. Their team is great. But a silent change could force me off the platform if it stays. They broke email. My paid subscribers cannot read today's paid newsletter on mobile without downloading the Substack app. @SubstackInc: roll this back. Now.

GergelyOrosz's tweet image.

Eugene Meidinger reposted

The moment Anthropic releases a visual Skill Builder UI (like a Zapier for AI skills), adoption will hockey stick


Eugene Meidinger reposted

a very good defcon sticker

TracketPacer's tweet image.

Eugene Meidinger reposted

A lot of discussion on open weights models seems to assume there is a clear incentive for building them. I don't see that this is the case. Unless you have no need for money (government sponsored?), there is no real way to capture value from your model even as model costs increase.


So let me get this straight: if I want to merge a Hyper-V checkpoint, I have to delete it. But if I want to revert any changes, I have to apply it?


Eugene Meidinger reposted

This is what I mean by "who builds this platform?" It's as if there was no one doing product. No comms, and a big part of the platform just - woosh! - disappears. Or maybe it's there. Or who knows x.com/ATurnblad/stat…

I had to create a passcode and then they were shown



Eugene Meidinger reposted

WTF, all my DMs are gone on X?! Who builds this platform? A case study in how to turn something that used to be trusted into something untrustworthy.


Really interesting to see MCP servers being used to support business users, similar to self-service BI. newsletter.pragmaticengineer.com/p/mcp-deepdive


Eugene Meidinger reposted

PSA: if you're a marketing agency selling Reddit-astroturfing as a service, you are actively poisoning the communities you seek to exploit for your clients. You are wasting the mod's time cleaning up this shit. You are adding to the enshittification of the internet. You are…


Eugene Meidinger reposted

LinkedIn feels really desperate trying to upsell AI products, everywhere. “Rewrite with AI” on the bottom of my every post (and so my LI feed is full of AI slop) Now recommending doing hiring with AI - when LI Jobs inbound is already flooded with AI applications and thus mostly…

GergelyOrosz's tweet image.

Eugene Meidinger reposted

Same day we've published our MCP deepdive in @Pragmatic_Eng (the on-the-ground realities of building and using MCP servers, the good and the bad). The deepdive: newsletter.pragmaticengineer.com/p/mcp-deepdive

GergelyOrosz's tweet image.

Anthropic is donating the Model Context Protocol to the Agentic AI Foundation, a directed fund under the Linux Foundation. In one year, MCP has become a foundational protocol for agentic AI. Joining AAIF ensures MCP remains open and community-driven. anthropic.com/news/donating-…



I wonder why they call it --dangerously-skip-permissions

Important reminder from Reddit here of the risk you're taking when you run Claude Code with --dangerously-skip-permissions "I found the problem and it's really bad [...] rm -rf tests/ patches/ plan/ ~/ - See that ~/ at the end? That's your entire home directory."

simonw's tweet image.


Eugene Meidinger reposted

Former director of Microsoft Office:

Everyone had access to the same internet. Everyone had access to the same PCs. Every bank had access to Excel. Visiting a big investment bank customer ages ago, they told me "you know, we make more money from Excel than Microsoft does". That put me in my place.



Eugene Meidinger reposted

As long as prompting/context engineering/AI skill matters, agents may actually serve to increase differences in outcomes among people, rather than reduce them.

What will economic outcomes look like as transactions become delegated to AI agents? Will human differences be smoothed away, leading to more homogenous outcomes, or will they be recreated and potentially even amplified? Will AI agents mitigate inequality, or will it persist…

alexolegimas's tweet image.


Let's see what all the hubbub is about.

SQLGene's tweet image.

Eugene Meidinger reposted

It's time to retire this idea of persona prompting - don't tell the LLM to act like X, tell it that it should answer for an audience of Y instead

We tested one of the most common prompting techniques: giving the AI a persona to make it more accurate We found that telling the AI "you are a great physicist" doesn't make it significantly more accurate at answering physics questions, nor does "you are a lawyer" make it worse.

emollick's tweet image.


Eugene Meidinger reposted

We tested one of the most common prompting techniques: giving the AI a persona to make it more accurate We found that telling the AI "you are a great physicist" doesn't make it significantly more accurate at answering physics questions, nor does "you are a lawyer" make it worse.

emollick's tweet image.

If you are using AI images with typos, it shows a distinct lack of pride and care in your work.

