Consider the simplified process:
1. Player A presses the move forward button and their client sends this message to the server
2. The server receives the message, processes it and sends it to Player B
3. Player B's client receives the message and renders Player A's movement on their screen
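To make that flow concrete, here's a minimal sketch in C++. All the names here are mine (this is not ANet's actual code), but it shows the same three steps: a client sends a movement input, the server relays it to the other connected clients, and those clients draw the result.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical names for illustration only -- not ANet's code.
struct MoveMessage {
    std::string playerId;
    float dx, dy;          // movement delta reported by the sending client
};

struct Client {
    std::string playerId;
    // Step 3: the receiving client renders the other player's movement.
    void onMoveReceived(const MoveMessage& msg) {
        std::printf("[%s's client] draws %s moving by (%.1f, %.1f)\n",
                    playerId.c_str(), msg.playerId.c_str(), msg.dx, msg.dy);
    }
};

struct Server {
    std::vector<Client*> clients;
    // Step 2: the server receives the message and forwards it to everyone else.
    void onMoveReceived(const MoveMessage& msg) {
        for (Client* c : clients) {
            if (c->playerId != msg.playerId) {
                c->onMoveReceived(msg);
            }
        }
    }
};

int main() {
    Client a{"PlayerA"}, b{"PlayerB"};
    Server server;
    server.clients = {&a, &b};

    // Step 1: Player A presses "move forward"; their client sends the input.
    server.onMoveReceived(MoveMessage{"PlayerA", 0.0f, 1.0f});
    return 0;
}
```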
Culling affects step 2. Imagine that there are 50 players who all move into Player B's view at once; to avoid a bottleneck, culling could mean that the messages announcing those players are sent to Player B's client in clumps rather than all at the same time. The net effect is that Player B will see only a portion of those players at first, and then more later.
This is a vast simplification, because other factors come into play (how many players there are, how many are already on Player B's screen, whether they're friendly or enemy, whether any of them go invisible, etc.).
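A rough sketch of that batching idea (again, purely illustrative, with made-up names and a made-up batch size): instead of telling Player B's client about all 50 new players in one go, the server queues them up and releases a clump per tick.

```cpp
#include <cstdio>
#include <deque>
#include <string>
#include <vector>

// Illustrative only: the server queues "player X is now visible to you"
// notifications and sends them in small clumps instead of all at once.
struct VisibilityQueue {
    std::deque<std::string> pending;   // players B's client doesn't know about yet
    std::size_t batchSize = 10;        // how many to reveal per tick (made-up number)

    // Called once per server tick: release the next clump of players.
    std::vector<std::string> nextBatch() {
        std::vector<std::string> batch;
        while (!pending.empty() && batch.size() < batchSize) {
            batch.push_back(pending.front());
            pending.pop_front();
        }
        return batch;
    }
};

int main() {
    VisibilityQueue queue;
    for (int i = 1; i <= 50; ++i) {
        queue.pending.push_back("Player" + std::to_string(i));
    }

    int tick = 0;
    while (!queue.pending.empty()) {
        auto batch = queue.nextBatch();
        std::printf("tick %d: Player B's client learns about %zu more players\n",
                    ++tick, batch.size());
    }
    return 0;
}
```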
Fallback models, on the other hand, I think affect only step 3. Although culling and fallback models are connected in the same process and together affect the final result, they are different problems.
But I'm not an expert and am happy for corrections if I got something horribly wrong.
Culling is actually a 3D-graphics term. It's the process of analyzing a scene and deciding what to render and what not to render. What's happening is that your client learns of a visible player but has no information on how that character looks, so ANet originally designed the client to just ignore the player until the visual data is received (that's not supposed to sound as bad as it does). It also has a lot to do with what a graphics card can handle.
So, in essence, ANet is now redesigning culling to use low-definition placeholder models so that players can see everyone in the area.
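A toy sketch of that change, going off my guess at the general shape (the names are mine, not ANet's): under the old behavior a character whose visual data hasn't arrived is simply skipped, while under the new behavior the client draws a cheap low-detail stand-in until the real model loads.

```cpp
#include <cstdio>
#include <optional>
#include <string>

// Illustrative names only; not ANet's actual implementation.
struct CharacterAppearance {
    std::string meshName;   // the fully detailed, player-specific model
};

struct VisiblePlayer {
    std::string name;
    std::optional<CharacterAppearance> appearance;  // empty until data arrives
};

// Old behavior: if we don't know what the character looks like, draw nothing.
void renderOld(const VisiblePlayer& p) {
    if (!p.appearance) return;                      // player stays invisible to us
    std::printf("draw %s with %s\n", p.name.c_str(), p.appearance->meshName.c_str());
}

// New behavior: fall back to a cheap generic model until the real one loads.
void renderNew(const VisiblePlayer& p) {
    const std::string mesh =
        p.appearance ? p.appearance->meshName : "low_detail_placeholder";
    std::printf("draw %s with %s\n", p.name.c_str(), mesh.c_str());
}

int main() {
    VisiblePlayer known{"PlayerA", CharacterAppearance{"unique_armor_set"}};
    VisiblePlayer unknown{"PlayerC", std::nullopt};

    renderOld(unknown);   // prints nothing: the old client ignored this player
    renderNew(unknown);   // prints a placeholder so Player B at least sees someone
    renderNew(known);     // full model once the visual data has arrived
    return 0;
}
```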