What do you call this type of repair?

thecomputerguy

Occasionally a client will bring a system in for a NO POST black screen. Obviously with those symptoms it could be just about anything. The last system brought in essentially ended up being fixed by removing the GPU, RAM, and PSU connectors, letting it sit, then plugging everything back in.

This type of repair seems to happen once in a while, and I just call it a "Power Lock," like power is stuck in the system somewhere, locking it up.

In this case, removing any one of the above components on its own didn't solve the issue. After I removed all three, then put in a test video card and one stick of RAM, it booted.

I returned the system to the configuration it was delivered in, and it booted up fine with all the original components.

A lot of times in cases like this the repair will last for the life of the computer.
 
I call it one hour of labor? Not sure what the point of the question is.

Well yeah, but I'm just trying to understand what this is or why it happens, I guess, so that I can better explain it to the client other than... "I took your garbage apart, put it back together, and somehow it started working again."

It's just not as black and white as, say, an HD or GPU failing.
 
Because they are poorly and cheaply built, parts don’t fit together well. Heat causes expansion, so parts wiggle out of their not-so-tight sockets. You could build a PC that would never have this issue, but you’d add two zeros to the end of the price tag for the military-grade hardware.
 
This type of repair seems to happen once in a while, and I just call it a "Power Lock," like power is stuck in the system somewhere, locking it up.
I'm at a loss to explain those things too. I may have even posted a similar thread here, I seem to recall. These kinds of problems/fixes seem to be less frequent these days.
 
I explain these to the customer as bad connections caused by environmental conditions: temperature fluctuations (e.g. here, right now, temps can be as low as 2C overnight and as high as 22C during the day); humidity/condensation (I see this more on computers that are switched off at the wall for long periods); and salt air near the sea, in combination with the other factors.

It's usually the RAM modules that get the bad connections: hundreds of contacts per module and up to 4 modules, so the odds of a connection issue add up.
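
Just to put rough numbers on the "higher odds" idea: if you assume each contact independently has some small chance of being marginal, the chance of at least one bad connection climbs quickly with the contact count. A back-of-envelope sketch (the per-contact probability is purely made up for illustration; 288 pins is the DDR4 DIMM count):

```python
# Back-of-envelope sketch: odds of at least one marginal contact, assuming each
# contact independently has some small chance of a poor connection.
# The per-contact probability is a made-up illustrative figure, not a measurement.
p_bad_contact = 1e-4                # hypothetical chance a single contact is marginal

for contacts in (288, 4 * 288):     # one DDR4 DIMM (288 pins) vs. four of them
    p_any_bad = 1 - (1 - p_bad_contact) ** contacts
    print(f"{contacts} contacts -> {p_any_bad:.1%} chance of at least one bad contact")
```

With those made-up numbers, four modules roughly quadruple the odds, which is the only point of the sketch.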
 
@nlinecomputers said:
"Because they are poorly and cheaply built so parts don’t fit together well. Heat causes expansion so parts wiggle out of their not so tight sockets."

Agreed

I've used "percussion maintenance" successfully on more than one occasion.

Millions of "smack the damn thing... it'll work" experts in this field can't be wrong.

😉:p


But seriously, I agree that environmental issues play a part, especially in environments where temperature fluctuations are large.
I also have had many occasions where a device has been bumped, moved, dropped, etc., and simply re-seating components or unplugging and re-plugging cables has fixed it.
 
We have pretty stark temperature swings here too, but unless the machine is being moved, for the most part they continue working.

The only thing I ever have to do is reseat the RAM, and that's typically right after a unit was delivered.
 
We have pretty stark temperature swings here too,

I realize you're talking about ambient temperatures, but I'm using temperature as a launching point. It amazes me how people have become absolutely paranoid about "running hot" when a machine is not running anywhere near hot. Most modern processors have a T-max of 100 degrees C, sometimes a little higher, and people are wailing and gnashing their teeth when they see spikes of 85–90 degrees under stress and idling temperatures between 45 and 55 degrees. None of that constitutes running hot.

It's not unusual for individual cores in a multi-core chip to spike above T-max, very briefly, when the machine is under heavy load, even when it's brand, spankin' new and completely free of dust in the cooling system.

You aren't running hot even if you're running at 85 degrees, consistently (which is very unlikely under light use), as 15 degrees C below T-max is well below "hot."
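
If anyone wants to sanity-check headroom on a live box rather than eyeball a monitoring widget, here's a minimal sketch. It assumes Linux with the psutil package installed, and the 100-degree T-max (and the 15-degree margin) are assumptions, so check the spec sheet for your actual CPU:

```python
# Minimal sketch: print each temperature sensor's reading and its headroom below
# an assumed T-max of 100 C.  Assumes Linux and the psutil package; the T-max
# and the 15 C "worth watching" margin are assumptions, not spec values.
import psutil

T_MAX_C = 100.0     # assumed T-junction max; varies by CPU model
MARGIN_C = 15.0     # headroom below T-max before it's worth a closer look

for name, entries in psutil.sensors_temperatures().items():  # Linux only
    for entry in entries:
        label = entry.label or name
        headroom = T_MAX_C - entry.current
        status = "fine" if headroom > MARGIN_C else "worth watching"
        print(f"{label}: {entry.current:.0f} C, {headroom:.0f} C below T-max ({status})")
```

By the logic above, anything sitting 15+ degrees under T-max is nowhere near "hot."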

And if you've ever worked in environments like restaurants, particularly fast-food ones, and seen what the insides of machines get like there, you come to realize that "a little dust" is of zero concern. I've seen machine interiors that should have caught fire from dust and grease, but didn't, and had clearly been chugging along just fine for ages (the complaint was not that the machine was overheating and shutting down unexpectedly). Modern processors are not shrinking violets or delicate flowers when it comes to running temperatures.
 
@britechguy I blame HP and Dell for that one, a decade of BIOS updates dorking with fan profiles to "fix" machines that cooked themselves to death.

Also, we had a long stretch of time when CPUs stayed below 70C under load. I'm not a huge fan of letting machines get above 80C myself, but that has more to do with the machine belching forth the fires of Tartarus into the room I'm sitting in than with the machine being healthy. People want PCs, not space heaters!
 
@Sky-Knight

I'm with you as far as not wanting a computer to double as a space heater, but, I have absolutely no choice in that matter. The CPUs/APUs run at the designed temperature range, and that range has become increasingly warmer over the years. I don't see this changing, ever.

Most times, when anything occurs over which you have zero control, going with the flow is not only the best, but the only, option. Go with the flow.
 
@Sky-Knight

I'm with you as far as not wanting a computer to double as a space heater, but, I have absolutely no choice in that matter. The CPUs/APUs run at the designed temperature range, and that range has become increasingly warmer over the years. I don't see this changing, ever.

Most times, when anything occurs over which you have zero control, going with the flow is not only the best, but the only, option. Go with the flow.
That's not true! You can use upgraded cooling systems which will keep the device at a lower operating temperature. The CPU and GPU also are not the only components in the system, and other components may or may not have similar thermal tolerances.

But there's a reason I push DESKTOPS here: they have enough space for me to put in an upgraded cooling solution if the situation calls for it. Meanwhile, laptops continue to suffer early thermal death because their cooling systems are simply not designed to operate in environments north of 82F.
 
The CPUs/APUs run at the designed temperature range, and that range has become increasingly warmer over the years. I don't see this changing, ever.
I do, because customers are demanding it. We've reached a plateau of computing power. Faster CPUs are of little benefit, so innovation has to come in the form of lower power consumption. This both increases battery life AND reduces heat output.
 
With the Intel Atom having a T-max of 90 degrees C, I stick by my assessment.

It's not a matter of less computing power, or even lighter power consumption (which is being demanded), but of the physical architecture, the ultra-tight packing on the processors themselves.

But time will certainly tell, and I'd be happy to accept being wrong on this.
 