alt.hn

4/21/2025 at 7:01:55 PM

A M.2 HDMI capture card

https://interfacinglinux.com/2025/04/18/magewell-eco-m-2-hdmi-capture-card/

by Venn1

4/21/2025 at 10:37:00 PM

>Fortunately, those extra PCIe lanes tend to get repurposed as additional M.2 holes.

Or unfortunately, for the unlucky people who didn't do their research, so now their extra M.2 drives are sucking up some of their GPU's PCIe bus.

by viccis

4/21/2025 at 10:52:08 PM

The vast majority of people run just one GPU, for which motherboards have a dedicated direct-to-CPU x16 slot. Stealing lanes comes into play with chipset-connected slots.

by Numerlor

4/21/2025 at 11:53:57 PM

I bought a Gigabyte X870E board with 3 PCIe slots (PCIe 5.0 x16, PCIe 4.0 x4, PCIe 3.0 x4) and 4 M.2 slots (3x PCIe 5.0, 1x PCIe 4.0). Three of the M.2 slots are connected to the CPU, and one is connected to the chipset. Using the 2nd and 3rd CPU-connected M.2 slots causes the board to bifurcate the lanes assigned to the GPU's PCIe slot, so you get x8 GPU, x4 M.2, x4 M.2.

I wish you didn't have to buy Xeon or Threadripper to get considerably more PCIe lanes, but for most people I suspect this split is acceptable. The penalty for gaming going from 16x to 8x is pretty small.

by zten

4/22/2025 at 12:48:08 AM

For a moment I didn't believe you, then I looked at the X870E AORUS PRO ICE (rev. 1.1) motherboard [1] and found this:

> 1x PCI Express x16 slot (PCIEX16), integrated in the CPU:

> AMD Ryzen™ 9000/7000 Series Processors support PCIe 5.0 x16 mode

> * The M2B_CPU and M2C_CPU connectors share bandwidth with the PCIEX16 slot.

> When the M2B_CPU or M2C_CPU connector is populated, the PCIEX16 slot operates at up to x8 mode.

[1]: https://www.gigabyte.com/Motherboard/X870E-AORUS-PRO-ICE-rev...

by ciupicri

4/22/2025 at 2:59:11 PM

IIRC, X870 boards are required to spend some of their PCIe lanes on providing USB4/Thunderbolt ports. If you don't want those, you can get an X670 board that uses the same chipset silicon but provides a better allocation of PCIe lanes to internal M.2 and PCIe slots.

by wtallis

4/22/2025 at 12:15:52 AM

Even with a Threadripper you're at the mercy of the motherboard design.

I use a ROG board that has 4 PCIe slots. While each can physically seat an x16 card, only one of them has 16 lanes -- the rest are x4. I had to demote my GPU to a slower slot in order to get full throughput from my 100GbE card. All this despite having a CPU with 64 lanes available.

by elevation

4/22/2025 at 12:54:56 PM

I don't think the Threadripper platform is to blame for you buying a board with potentially the worst possible PCIe lane routing. The latest generation has 88 usable lanes at minimum, most boards have four x16 slots, and Pro supports 7 Gen 5.0 x16 links, an absolutely insane amount of IO. "At the mercy of motherboard design"? Do the absolute minimum amount of research and pick any other board.

by grw_

4/22/2025 at 5:30:05 AM

You're using 100GbE ... in an end-user PC? What would you even saturate that with?

by nrdvana

4/22/2025 at 8:23:25 AM

I wouldn't think it's about saturating it during normal use; rather, simply exceeding 40 Gbit/s, which is very possible with solid-state NASes.

by aaronmdjones

4/22/2025 at 7:42:04 PM

Okay, but then I need to ask what kind of use case doesn't mind the extra latency from ethernet but does care about the difference between 40Gbps and 70Gbps.

by Dylan16807

4/22/2025 at 3:55:08 AM

Though for the most part, the performance cost of going down to x8 PCIe is pretty tiny - only a couple of percent at most.

[0] shows a pretty "worst case" impact of 1-4% - that's on the absolute highest-end card possible (a GeForce 5090) pushed all the way down to PCIe 3.0 x16. A lower-end card would likely show an even smaller difference. They even showed zero impact from PCIe 4.0 x16, which is the same bandwidth as x8 of the PCIe 5.0 lanes supported on X870E boards like you mentioned.

Though if you're not gaming and know you're already PCIe-limited, the cost could be larger - but people with that sort of use case likely already know what to look for, and have systems tuned to it rather than a generic consumer gamer board.

[0] https://gamersnexus.net/gpus/nvidia-rtx-5090-pcie-50-vs-40-v...
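
For reference, that equivalence is easy to sanity-check: per-lane PCIe throughput is just transfer rate times line-encoding efficiency (8b/10b for gen 1/2, 128b/130b for gen 3 and later). A rough sketch in Python, ignoring protocol overhead:

    # Per-lane transfer rates in GT/s for each PCIe generation.
    GT_PER_S = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0}

    def lane_gbps(gen: int) -> float:
        """Usable Gbit/s per lane after line encoding."""
        eff = 8 / 10 if gen <= 2 else 128 / 130
        return GT_PER_S[gen] * eff

    def link_gbytes(gen: int, lanes: int) -> float:
        """Usable GB/s for a whole link."""
        return lane_gbps(gen) * lanes / 8

    print(f"x16 gen4: {link_gbytes(4, 16):.1f} GB/s")  # ~31.5 GB/s
    print(f"x8  gen5: {link_gbytes(5, 8):.1f} GB/s")   # ~31.5 GB/s, identical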

by kimixa

4/22/2025 at 7:30:08 AM

>I wish you didn't have to buy Xeon

But that's the whole point of Intel's market segmentation strategy - otherwise their low-tier workstation Xeons would see no market.

by dur-randir

4/22/2025 at 5:25:33 AM

I wonder how this works. I'm typing this on a machine running an i7-6700K, which, according to Intel, only has 16 lanes total.

It has a 4x SSD and a 16x GPU. Their respective tools report them as using all the lanes, which is clearly impossible if I'm to believe Intel's specs.

Could this bifurcation be dynamic, and activate those lanes which are required at a given time?

by vladvasiliu

4/22/2025 at 5:46:12 AM

For Skylake, Intel ran 16 lanes of PCIe to the CPU and ran DMI to the chipset, which had PCIe lanes behind it. Depending on the chipset, there would be anywhere from 6 lanes at PCIe 2.0 to 20 lanes at PCIe 3.0. My wild guess is that a board from back then would have put M.2 behind the chipset, with no CPU-attached SSD for you; that fits with your report of the GPU having all 16 lanes.

But if you had the nicer chipsets, Wikipedia says your board could split the 16 CPU lanes into two x8 slots, or one x8 and two x4 slots, which would fit. This would usually be dynamic at boot time, not at runtime; the firmware would typically look at whether anything is in the x4 slots and, if so, set bifurcation; otherwise the x16 gets all the lanes. Some motherboards do have PCIe switches to use the bandwidth more flexibly, but those got really expensive -- I think at the transition to PCIe 4.0, but maybe 3.0?
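
Incidentally, if you want to see what each device actually negotiated rather than trusting a vendor tool, Linux exposes the trained link state in sysfs; lspci -vv reports the same thing in its LnkSta/LnkCap lines. A minimal sketch using the standard sysfs PCI attributes:

    # Print negotiated vs. maximum PCIe link width and current speed
    # for every PCI device on a Linux system.
    from pathlib import Path

    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        try:
            speed = (dev / "current_link_speed").read_text().strip()
            width = (dev / "current_link_width").read_text().strip()
            max_w = (dev / "max_link_width").read_text().strip()
        except OSError:
            continue  # not every PCI function exposes link attributes
        print(f"{dev.name}: x{width} of x{max_w} at {speed}")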

by toast0

4/22/2025 at 8:25:20 AM

Indeed. I dug out the manual (MSI H170 Gaming M3), which has a block diagram showing the M.2 port behind the chipset, which in turn connects via DMI 3.0 to the CPU. In my mind the chipset was connected via actual PCIe, but apparently it's counted separately from the "actual" PCIe lanes.

by vladvasiliu

4/22/2025 at 3:02:37 PM

Intel's DMI connection between the CPU and the chipset is little more than another PCIe x4 link. For consumer CPUs, they don't usually include it in the total lane count, but they have sometimes done so for Xeon parts based off the consumer silicon, giving the false impression that those Xeons have more PCIe lanes.

by wtallis

4/21/2025 at 10:54:10 PM

The real PITA is when adding the NVMe disables the SATA ports you planned to use.

by doubled112

4/22/2025 at 2:38:15 AM

Doesn't this usually only happen when you put an M.2 SATA drive in? I've never seen a motherboard manual have this caveat for actual NVMe M.2 drives. And encountering an M.2 SATA drive is quite rare.

by creatonez

4/22/2025 at 5:18:26 AM

I have a spare-parts NAS on a Z170 (Intel 6k/7k) motherboard with 8 SATA ports and 2 NVME slots - if I put an x2 SSD in the top slot it would disable two ports, and if it was an x4 it would disable four! Luckily the bottom m2 slot doesn’t conflict with any SATA ports, just an expansion card slot. (The board supports SATA Express even - did anything actually use that?)

SATA ports are far scarcer these days though and there’s more PCIE bandwidth available anyways, so it’s not surprising that there aren’t conflicts as often anymore.

by ZekeSulastin

4/22/2025 at 5:56:52 AM

Nope; for AM5, both of the available chipsets [1] have 4 flexible serial lanes that can be configured as PCIe 3.0 x4, 4x SATA, or two and two. I think Intel does something similar, but I haven't really kept up.

[1] A620 is cut down, but everything else is actually the same chip (or two)

by toast0

4/22/2025 at 6:17:41 AM

As some others have pointed out, on some motherboards, if you use M.2 cards in the wrong slot, your x16 GPU slot turns into x8.

by viccis

4/22/2025 at 12:49:14 PM

Luckily, unless you're doing something odd with your GPU, it isn't using the bandwidth and won't lose significant performance either way.

Steve from Gamers Nexus tests new GPUs on older PCIe generations and the difference is negligible. And since PCIe has always doubled bandwidth with each generation, running a generation back is effectively the same as running at half bus width.

I run an Intel A380 for Linux and an NVIDIA 3060 for a Windows VM (I'm a bit cheap). I opted for using some Intel SATA 6Gb/s datacenter drives we decommissioned from work over using more PCIe for storage, but the performance is outstanding.

Modern game engines don't need all those gigabytes per second. If you're doing AI maybe it matters, but then you probably hopefully maybe won't cheap out on consumer CPUs with 20 PCIe lanes either.

by carlhjerpe

4/22/2025 at 2:55:34 PM

Low-end GPUs often only have eight PCIe lanes to begin with, sometimes because they're using chips that were designed more for laptop use than for desktop cards. Intel Arc A380 and B580, AMD 7600XT, NVIDIA 4060Ti and 5060Ti are all eight-lane cards.

by wtallis

4/23/2025 at 5:22:33 AM

I really do think we're due for an expansion refactor. 75 watts is an awkward amount of power, and PCBs are worse than wires for data throughput and signal integrity. It feels like GPUs would be much happier if we just had a cable to plug into the motherboard that transferred data and no power, and which didn't force the GPU to hang off the side of the motherboard and break during shipping.

by adgjlsfhk1

4/21/2025 at 11:08:44 PM

New chipsets have become PCIe switches since Broadcom rug-pulled the PCIe switch market.

by throwaway48476

4/22/2025 at 1:46:35 AM

I wish someone bothered with modern bifurcation and/or generation downgrading switches.

For homelab purposes I'd rather have two Gen3 x8 slots than one Gen5 x4 slot, as that'd allow me to use a (now ancient) 25G NIC and an HBA. Similarly, I'd rather have four Gen5 x1 slots than one Gen5 x4 slot: Gen5 NVMe SSDs are readily available, even a single Gen5 lane is enough to saturate a 25G network connection, and it'd let me attach four SSDs instead of only one.

The consumer platforms have more than enough IO bandwidth for some rather interesting home server stuff, it just isn't allocated in a useful way.
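
The arithmetic behind that checks out; a rough sketch (per-lane rates from the PCIe specs, 128b/130b encoding, protocol overhead ignored):

    # Usable Gbit/s per lane for PCIe gen 3 and later (128b/130b encoding).
    def lane_gbps(gen: int) -> float:
        return {3: 8.0, 4: 16.0, 5: 32.0}[gen] * 128 / 130

    print(f"gen3 x8: {lane_gbps(3) * 8:6.1f} Gbps")  # ~63.0 - a 25G NIC plus headroom
    print(f"gen5 x1: {lane_gbps(5) * 1:6.1f} Gbps")  # ~31.5 - saturates 25G on its own
    print(f"gen5 x4: {lane_gbps(5) * 4:6.1f} Gbps")  # ~126.0 - the whole slot's budget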

by crote

4/22/2025 at 11:57:03 AM

Gen5 doesn't exist in x1 products; that's an IP issue.

The other issues you mention are only "firmware disabled". Chipsets are capable of bifurcation, so maybe try visiting some Chinese / Russian firmware-hacking forums... I found out that the server and consumer Ice Lake generations have been fully "cracked" open there for years. You can find all sorts of "BIOS" generators which can generate a "BIOS" to your liking.

The cheapest way to do what you describe (after fiddling with your "BIOS"/"firmware") is to buy an NVMe HBA, which is just a renamed PCIe switch IC... ;) A brand-name PCIe 4.0 switch can be bought for 1000-1200 dollars retail and will let you bifurcate down to x1, but I'm not sure about prices for PCIe 5.0.

Or if you want a more costly but out-of-the-box working device, look at [ https://www.h3platform.com/product-detail/Topology/24 ]

And do not forget that newer Intel PCs have USB at 40 Gbps... so do you really need 25 Gbps Ethernet? Linux / BSD doesn't care what you transfer over USB (Ethernet/IPv6 encapsulated in USB)...

EDIT: you can buy an HBA with 8 x4 connectors. That HBA/switch connects to your PC over PCIe 4.0 x8, so 8 ports at x4 each is 32 downstream lanes behind an x8 uplink: everything is oversubscribed, so you get x1-class speeds per device, you can connect only one lane of an x4 device, etc. Out-of-the-box thinking.

by Calwestjobs

4/22/2025 at 3:52:38 PM

Installing a BIOS from Chinese / Russian forums might be the last thing one would ever want to do, though.

by antonkochubey

4/22/2025 at 9:46:47 PM

Of course; it was more like "go ask the countries that don't care about IP/NDAs" - they can post it freely. You can always copy what they do without even using the whole code/image/bin.

by Calwestjobs

4/22/2025 at 4:35:57 PM

I'll host them on a US/EU forum. Better now?

by FirmwareBurner

4/21/2025 at 11:16:51 PM

>broadcom rug pulled the PCIe switch market.

What does this mean? Did they jack up prices?

by gruez

4/22/2025 at 12:06:23 AM

Avago wanted PLX switches for enterprise storage, not low-margin PC/server sales.

It's the same thing Avago did with Broadcom, LSI, Brocade, etc. during the 2010s: buy a market leader, dump the parts they didn't want, and leave a huge hole in the market.

When you realize that Avago was the brand created when KKR and Silver Lake bought the chip business from Agilent, it's just the typical private equity play: buy your market position and sell off or shut down the parts you don't care about.

by nyrikki

4/22/2025 at 8:46:21 AM

When the LSI buy happened, they dumped massive amounts of new but prior-gen stock into the distributor channel, then immediately declared it EOL.

Scumbags.

by bbarnett

4/22/2025 at 11:26:05 AM

You can run your GPU in x4 and your eyes won't see the difference.

EDIT: yes.

by Calwestjobs

4/22/2025 at 11:25:03 AM

Another insane thing is having 40 Gbps USB ports that can't be used as a networking cable between two PCs... Why is there still no project that does this by simply installing a trivial driver (on Windows) and a 4-line patch to enable it on Linux? Just charge 2 dollars per year and you'll be a millionaire (until LTT / MS catches up and steals it from you).

It is even possible to have a Linux machine act as a DisplayPort sink to be used as a capture card, for streamers, YouTubers... with 0 dollar investment, 0 hardware...

by Calwestjobs

4/22/2025 at 8:08:39 PM

> Why is there still no project that does this by simply installing a trivial driver (on Windows)

The driver has been built in to Windows for years, as it's the same tech as Thunderbolt networking. Just plug two Thunderbolt or USB4 machines together with a cable.

https://learn.microsoft.com/en-us/windows-hardware/design/co...

> and a 4-line patch to enable it on Linux?

Supported since kernel 4.15 in early 2018.

https://www.kernel.org/doc/html/v4.15/admin-guide/thunderbol...
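
Once the thunderbolt-net interfaces are up on both ends, they behave like any other NIC. A crude throughput check, as a sketch (hypothetical port number and addressing; CPython itself will bottleneck well below 40 Gbps, so use iperf3 for real numbers):

    import socket
    import sys
    import time

    CHUNK = 1 << 20  # 1 MiB buffer

    def server(port=5001):
        # Receive until the peer disconnects, then report the average rate.
        with socket.create_server(("", port)) as srv:
            conn, _ = srv.accept()
            buf = bytearray(CHUNK)
            total, start = 0, time.monotonic()
            while n := conn.recv_into(buf):
                total += n
            secs = time.monotonic() - start
            print(f"{total * 8 / secs / 1e9:.2f} Gbps")

    def client(host, port=5001, seconds=10):
        # Blast zeros at the server for a fixed duration.
        payload = b"\0" * CHUNK
        with socket.create_connection((host, port)) as conn:
            deadline = time.monotonic() + seconds
            while time.monotonic() < deadline:
                conn.sendall(payload)

    if __name__ == "__main__":
        # Run "server" on one machine, then "client <server-ip>" on the
        # other, using the address assigned to the Thunderbolt interface.
        server() if sys.argv[1] == "server" else client(sys.argv[2])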

> It is even possible to have a Linux machine act as a DisplayPort sink to be used as a capture card, for streamers, YouTubers... with 0 dollar investment, 0 hardware...

This is not true. When a USB-C connection is being used in DisplayPort Alternate Mode two or four of the high-speed pairs in the cable are literally switched from being connected to the USB controller to being connected to the DisplayPort controller. Both the source and sink devices have to support the same alternate modes to operate, so you would still need to have actual DisplayPort sink hardware to capture a DP-on-USBC signal. DisplayPort capture hardware is rare on its own compared to HDMI capture, I don't think there's a single one out there that takes a USB-C DP Alt Mode input because there's no real reason to ever do that.

by wolrah

4/22/2025 at 8:53:32 PM

Win & Linux - great! Thank you. Unfortunately there aren't a lot of AMD systems with USB4 hardware :/

DisplayPort sink - ah OK, I thought it was some sort of tunneling, but it's just a mux/switch + retimer. So it only works one way.

by Calwestjobs

4/22/2025 at 9:02:01 PM

And for what it's worth, Macs have had this ability for basically as long as they've had Thunderbolt. The same can also be done over FireWire on all major platforms, but these days 400/800 Mbps just isn't what it was back in the day.

edit: Also meant to mention that while Thunderbolt has always been rare on AMD due to its requirement of an Intel chip, USB4 is somewhat common nowadays as it's officially supported on the Zen 3+ and newer CPUs

by wolrah

4/22/2025 at 9:42:21 PM

Yeah, I had a Mac and had to clone a disk over TB one time.

But Mac recovery is magical: the WiFi password is written to NVRAM, so even after you erased/replaced the whole disk, the "recovery agent" started, downloaded a macOS image over WiFi, and installed it! That was on an Intel Mac; I'm not sure if it still works the same way on ARM. It took 2 hours XD

Apple's FireWire and TB had problems with security: it was possible to do DMA reads from your RAM over those ports, from an external device.

by Calwestjobs

4/22/2025 at 11:39:36 PM

The DMA security issue was an Intel problem more than an Apple problem. It was Apple's popularization of Thunderbolt that got Intel to finally stop disabling IOMMU support for product segmentation reasons.

by wtallis

4/23/2025 at 9:42:36 AM

The security issue was an Apple thing: they talk security and privacy but shipped TB and FireWire with the possibility of "TSA" agents cloning your drive in 3 minutes. It has nothing to do with any product segmentation. Also, they HAD unencrypted iCloud backups while having a big public fight with the FBI about unlocking one iPhone... good old days.

by Calwestjobs

4/24/2025 at 8:14:00 PM

> security issue was Apple thing

Not true; basically everyone shipping laptops with Firewire ports in that era was at risk of DMA attacks. Apple was merely the most notable vendor trying to offer something better than USB 2.0.

> has nothing to do with any product segmentation

Go look at which Intel Sandy Bridge or Ivy Bridge processors had VT-d (IOMMU) capability enabled vs disabled. The product segmentation strategy is on plain display. They made overclocking mutually exclusive with IOMMU for several generations of processor. There's no technological basis for that, just artificial product segmentation.

Please put at least a little bit of effort into fact-checking yourself before continuing ranting. It's not that hard, and you'll be much more convincing if you don't exaggerate your complaints to the point of being obviously wrong.

by wtallis

4/22/2025 at 7:17:47 PM

There have been many such attempts, but none ever really took off.

I think the core problem is that "transferring bulk data directly between two computers" is not a problem that many people actually have. At least, not in a way that isn't solved by other means like an intermediate USB drive or simply waiting for your conventional network to do it.

In reality, what you propose can be done with zero software: simply put two USB network adapters back to back inside your cable. That's traditionally how this has been done. If you want a raw USB connection, one device must support USB device mode, which is extremely rare on PCs. I don't know if you can fix this in userland, or even the kernel. My bet is that this is a hardware/firmware level feature, though I don't know for sure.

by mystified5016

4/22/2025 at 9:34:34 PM

I was thinking about something like using a NAS as a USB dock for my notebook / tablet:

power delivery (even through some kind of splitter)

+ a 20/40 Gbps network for syncing AND internet (WiFi is constantly down-rating the connection, which causes annoying delays and latencies; I live in a high-rise...)

by Calwestjobs

4/22/2025 at 7:08:23 PM

It is somewhat possible, but there are rumors that Intel intentionally nerfs the speed so as not to compete with their datacenter products. It could also just be some other resource limit in the hardware; you usually aren't saturating a Thunderbolt connection with just one type of data stream. This guy built a small ring-topology network using mini PCs and Thunderbolt 4.

https://fangpenlin.com/posts/2024/01/14/high-speed-usb4-mesh...

The distance isn't great for USB/Thunderbolt 4 cables, so it would really only make sense for cluster computing, and since no switches exist for this kind of thing, you're limited in network topology. Generally, the kind of PCs that need a fat pipe already have the slots and lanes available for 40G or 100G cards. It's a cool experiment for the homelab, but it's clear the industry isn't interested.

by wildzzz

4/22/2025 at 9:28:24 PM

I'm not sure about Intel limiting it. A lot of the time people forget to account for encoding and/or packet headers on top of raw speeds, or something like that.

Or, in that article, it's possible the USB4 controller is sharing bandwidth with some other device, or it's only 20 Gbps PER controller and he connected 2 cables to the same computer in the middle. So 10 in + 10 out is 20 Gbps? Maybe if he measured speeds between only neighboring machines it would be 20 Gbps? But his speed is 11.8, so that doesn't make sense either. I don't know.

I was thinking of home use: when I come home with my notebook I can quickly sync it to the NAS. Both have NVMe drives, so 3-4 GB/s should be possible, especially for video. Also, not needing WiFi because I'm connected through the same cable I'm charging with is awesome.

Recovery/SSD cloning is also nicer over a quick connection.

Virtual machines can stay on the NAS too at those speeds; no need to copy.

by Calwestjobs

4/22/2025 at 2:39:47 PM

>Why there is still no project which can do this

>just charge 2 dollars per year and youll be milionaire

I don't know, why don't you do it? Sounds like easy money.

by poincaredisk

4/22/2025 at 9:36:38 PM

There aren't enough people to do stuff, in any profession.

by Calwestjobs

4/22/2025 at 11:58:13 AM

USB isn't very reliable in my experience. It could work if you add your own error correction on top.

by amelius

4/22/2025 at 12:34:15 PM

You can deal with that at another layer of the OSI model: just encapsulate/transfer something which already has FEC. Not sure if WireGuard has FEC.
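
For illustration, the simplest FEC you can bolt on at a higher layer is one XOR parity block per group, which recovers any single lost block (real links would use something stronger, like Reed-Solomon, but the layering idea is the same). A toy sketch:

    def xor_blocks(blocks):
        # XOR a list of equal-length blocks together.
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                out[i] ^= byte
        return bytes(out)

    def encode(data_blocks):
        # Append one parity block to the group.
        return list(data_blocks) + [xor_blocks(data_blocks)]

    def recover(group, lost_index):
        # XOR of everything except the missing block reconstructs it.
        return xor_blocks([b for i, b in enumerate(group) if i != lost_index])

    data = [b"hello   ", b"usb     ", b"network "]
    group = encode(data)
    assert recover(group, 1) == data[1]  # any single lost block is recoverable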

by Calwestjobs

4/22/2025 at 7:11:21 PM

You want your FEC at the lowest level possible in the OSI model so errors are detected early in the stack and you resend as few packets as possible.

by wildzzz

4/22/2025 at 12:31:42 PM

LTT?

by taskforcegemini

4/22/2025 at 12:34:55 PM

A YouTuber who spreads marketing misinformation and laughs in your face on livestream while doing it.

by Calwestjobs

4/21/2025 at 10:12:59 PM

If I wanted to capture something with HDCP, what’s the most straightforward path to stripping it away?

by 0cf8612b2e1e

4/21/2025 at 10:47:47 PM

HDFury has multiple devices that can do it, but they are fairly expensive. Many of the cheap HDMI 1x2 splitters on Amazon also strip HDCP on the secondary output. You can check reviews for hints.

by kodt

4/21/2025 at 10:25:57 PM

There are various splitters and mixers that have the necessary EDID/HDCP emulator functions.

I don't know if anybody has managed to figure out how to defeat HDCP higher than 1.4, though.

by baby_souffle

4/21/2025 at 10:28:22 PM

> I don't know if anybody has managed to figure out how to defeat HDCP higher than 1.4, though.

This works for me: https://www.amazon.com/dp/B08T64JWWT

by mistersquid

4/21/2025 at 10:47:54 PM

Aside from the high number of 1-star reviews complaining about the gadget dying fast - how in god's name is this thing still selling assuming it can actually strip HDCP for modern HDMI standards?

I'd have expected HDMI LA to be very, very strict in enforcing actions against HDCP strippers. If not, why even keep up the game? It's not like HDCP stops anyone - pirates can already defeat virtually all copy protection mechanisms on the market, even before HDCP ever enters the field.

by mschuster91

4/21/2025 at 11:30:55 PM

> Aside from the high number of 1-star reviews complaining about the gadget dying fast - how in god's name is this thing still selling assuming it can actually strip HDCP for modern HDMI standards?

How is 1 review a "high number of 1-star reviews"?

There are a total of 32 reviews for this device, 2 of which are 1-star reviews. Only one of those warns "Stopped working in 5 minutes". The other 1-star review notes (in translation) "When I tried this device, I got another very bad device at a lower price".

I'm not sure what your expectation that "HDMI LA to be very very strict in enforcing actions against HDCP strippers" means in this context. Indeed, your second paragraph seems to be an expression of consternation that manufacturers would go through the trouble of implementing HDCP given how easily it can be circumvented.

by mistersquid

4/22/2025 at 12:17:54 AM

> I'm not sure what your expectation that "HDMI LA to be very very strict in enforcing actions against HDCP strippers" means in this context.

It used to be the case that HDMI LA would act very swiftly on any keybox leaks and revoke the certificates, as well as pursuing legal actions against sellers of HDCP strippers. These devices were sold by fly-by-night eBay and darknet sellers, not right on the storefront of Amazon.

> Indeed, your second paragraph seems to be an expression of consternation that manufacturers would go through the trouble of implementing HDCP given how easily it can be circumvented.

Manufacturers do because HDCP is a requirement to even be allowed to use the HDMI trademark, in contrast to DisplayPort. I was referring to HDMI LA and the goons of the movie rightsholder industry that insist on continuing this pointless arms race.

by mschuster91

4/22/2025 at 12:35:42 AM

Not even the finest lawyers can keep up with fly-by-night marketplace suppliers with company names that are just random letters.

by Havoc

4/22/2025 at 12:35:19 PM

> how in god's name is this thing still selling assuming it can actually strip HDCP for modern HDMI standards?

These random-letter brand stores come and go so quickly that I guess they're selling under a different name by the time your lawyer has had time to send a letter.

by prmoustache

4/22/2025 at 2:17:08 AM

As with the vast majority of products on Amazon, you could probably find the same on Aliexpress/baba for less.

by userbinator

4/22/2025 at 3:11:06 AM

If you're in the United States you can expect hefty brokerage fees and tariff charges for anything arriving internationally starting on May 2nd.

If the Amazon listing ships from the United States it's a better choice now.

by Aurornis

4/22/2025 at 11:37:46 AM

My USB HDMI capture thing removes it. I wouldn't be surprised if the other no-name ones did the same. It was only like $20, too.

by goosedragons

4/22/2025 at 12:15:03 AM

Looking for a way to freeze an HDMI feed so that the current image (a PPT slide) stays up on the projector/TV while edits are made. Any suggestions welcome.

by peterburkimsher

4/22/2025 at 12:23:10 AM

https://magicmusicvisuals.com/ combined with https://obsproject.com/. And possibly OBS all by itself.

by coolhand2120

4/22/2025 at 6:56:51 AM

Thanks, I'm aware of OBS. Ideally I'm hoping for an embedded system solution, rather than adding more software to the workflow.

by peterburkimsher

4/22/2025 at 1:17:36 PM

I'm not saying this is the most practical or affordable solution compared to a more specialized product, but if you definitely want a hardware solution, I do know (depending on your HDMI spec requirements) this would be achievable with any FPGA dev board that has HDMI In/Out. I can't say I've worked on this exact solution, but I did some basic development with a Xilinx (Z7?) board years ago that involved overlaying and modifying an incoming HDMI signal. What you're looking for should be achievable by the same means assuming there's no signal issues I'm failing to account for.

by mynameajeff

4/22/2025 at 6:37:55 PM

So kind of like the old freeze frame feature on TVs?

by wildzzz

4/22/2025 at 7:58:12 PM

Yes

by peterburkimsher

4/21/2025 at 11:11:58 PM

Looking for a way to show an image over HDMI while my embedded system is booting, and then (seamlessly) switching over to the HDMI output of that system when booting finishes. Any ideas how to accomplish that? In hardware, of course.

by amelius

4/26/2025 at 4:34:38 AM

It really depends what embedded system (and display) you have.

Most "reputable" embedded system/board vendors in the x86 space are going to offer a firmware customization tool, with one of the primary uses being so that you can brand the splash screen at startup. That gets you an image as soon as the firmware brings up the display controller, which should be very fast (i.e. well under a second). YMMV for less supported embedded systems, particularly those with more complicated/proprietary display bringup.

Depending on the display in question (assuming you can control the display in use), you may have the ability to configure a splash screen in the display controller itself (the display controller vendor can certainly do this, whether or not they expose it to you is the question).

The most complicated (but also most flexible) option is to get whatever the simplest FPGA that can support a HDMI TX and a HDMI RX for the display resolution you desire. Then you build whatever splash screen you want into the FPGA firmware, and whatever boot-finished detection you want to handle the switch from internal splash image to HDMI RX. You'll get the splash screen essentially as soon as your FPGA and HDMI TX power on. If you really know what you're doing, you could even get HDCP passthrough from the embedded system to the display to work here, although you probably can ignore that.

by cpgxiii

4/22/2025 at 12:11:00 AM

Seems to me like the answer is to get the splashscreen going earlier in your boot process. If you know the display geometry, I suspect you can skip the DDC read and just hardcode the mode and stuff, which should save even more time.

by myself248

4/22/2025 at 10:00:40 AM

I asked for a hardware solution. On some embedded systems there is an entire hierarchy of proprietary crapware preventing access to the display controller, and I just don't want to look into it because chances are it is futile anyway and they might change it next year or I will move to a different platform and I'll have to reinvent this again, and again. Hence hardware.

by amelius

4/21/2025 at 11:42:41 PM

There are a bunch of HDMI switches with buttons on them, and some with remotes. Doesn't seem too outlandish to rig these buttons or remotes to be controlled by the computer itself.

by actionfromafar

4/21/2025 at 11:54:56 PM

Yeah, I've looked into them, but I still need to generate the second image and these switches typically don't provide a seamless transition, so it's not optimal.

by amelius


4/22/2025 at 5:50:01 AM

absolutely wild seeing all the ways people stretch these boards tbh - kinda makes me wanna mess with my own setup more

by gitroom

4/22/2025 at 1:53:58 PM

I wish they’d never put a . in the name

by MortyWaves

4/22/2025 at 5:48:52 AM

>PCIe slots are becoming an endangered species on modern motherboards

Except... not at all? Just about any ATX-sized motherboard is going to have a full-sized x4 slot and a small x1 slot _in addition_ to the x16 one.

And with decent audio and a 2.5Gbps Ethernet PHY on board, even those slots often sit disused.

I mean, if you want to test goofy hardware - go for it, no need to invent a justification.

by puzzlingcaptcha

4/22/2025 at 8:23:19 AM

Except most motherboards I see in the wild are mATX. They're more common and cheaper in stores, and basically every pre-built comes with mATX.

by franga2000

4/22/2025 at 6:41:01 PM

And anyone actually requiring more than just a single GPU and maybe a sound card probably has some pretty specific compute needs that would require something other than a cheap pre-built.

by wildzzz

4/22/2025 at 8:40:01 AM

I'd say that proves the point, doesn't it?

Those Gen3 x1 slots are limited to 7.88Gbps so they are pretty pointless for anything beyond a sound card, and you only get a single Gen4 x4 slot. All the other x4 connections are taken up by M.2!

Want to add a capture card and a HBA? Not enough slots. Want to add a capture card and a 10G/25G NIC? Not enough slots. Want to add a capture card and a USB-C expansion card? Not enough slots.

by crote

4/22/2025 at 3:15:00 PM

> Those Gen3 x1 slots are limited to 7.88Gbps so they are pretty pointless

This reviewer tested the M.2 capture card with two 1080p60 signals that have a data rate of 3.2Gbps each, so a PCIe gen3 x1 slot would have been plenty of bandwidth for this use case (but not for capturing at 4k).

Don't make the mistake of believing that a device which has lots of PCIe lanes actually needs all of them active to fulfill its purpose and be useful.
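
The arithmetic is easy to verify; a quick sketch (the exact figure depends on whether blanking intervals are counted):

    # One uncompressed 1080p60 RGB24 stream, active pixels only:
    active = 1920 * 1080 * 60 * 24
    print(f"{active / 1e9:.2f} Gbps")   # ~2.99 Gbps

    # Counting the full HDMI raster (2200 x 1125 per frame with blanking):
    raster = 2200 * 1125 * 60 * 24
    print(f"{raster / 1e9:.2f} Gbps")   # ~3.56 Gbps

    # Either way, two such streams fit within a gen3 x1 link (~7.9 Gbps usable).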

by wtallis

4/22/2025 at 11:43:36 AM

what exactly is this good for?

by gatnoodle
