Originally Posted by xpian
@Carnifex - The author of that web page has several levels of recommendation for PPI to LPI ratios. There is a section for greyscale and a section for color, and his recommendations are slightly different. He has *minimum*, *ideal*, and *maximum* categories. Having looked over the page in question, it seems to be in line with both my experience as a printer and what I've read from many other sources. I'll just refer people to the link, rather than repeating here what that (excellent) source says.
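For anyone who wants a ballpark without clicking through, here's a little Python sketch of the usual rule of thumb (file PPI of roughly 1.5x to 2x the halftone screen's LPI). The exact multipliers below are my assumption for illustration, not the linked page's numbers, which differ a bit between greyscale and color:

```python
def ppi_recommendations(lpi: float) -> dict:
    """Rough minimum/ideal/maximum file PPI for a given halftone screen LPI.

    The multipliers are the common rule of thumb, not the linked page's
    exact figures: ~1.25x as a floor, 1.5x-2x as the useful range, and
    anything past 2x is resolution the RIP simply throws away.
    """
    return {
        "minimum": round(lpi * 1.25),
        "ideal":   round(lpi * 1.5),
        "maximum": round(lpi * 2.0),
    }

# Example: a 150 LPI magazine screen
print(ppi_recommendations(150))   # {'minimum': 188, 'ideal': 225, 'maximum': 300}
```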
The above discussion is about traditional "4-color process" halftoning, the kind found in newspapers and most magazines. One of my points, which we haven't really been discussing, is that many people printing things these days are doing it on desktop inkjet printers, and those printers use an entirely different form of halftoning. With stochastic screening, as inkjets use, the dots are much smaller and are sprayed in a semi-random pattern: density of tone is achieved by clustering the dots more tightly, not by varying their size. I'm sure many people already know this, but the upshot of the difference is that the PPI recommendations are different. The inkjet printer on my desktop says it has a "1200 DPI" resolution, but I wouldn't want people to think they need to send it files at 1200, 1800, or 2400 pixels per inch. That would be silly. When printing with stochastic screens, especially with continuous-tone images like photographs or paintings, the PPI of your file can be much lower than the DPI of your printer.
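Floyd-Steinberg error diffusion isn't literally what an inkjet driver runs, but it's a simple, runnable stand-in for the frequency-modulated idea: fixed-size dots whose spacing, not size, carries the tone. This is just my own toy illustration:

```python
import numpy as np

def floyd_steinberg(coverage: np.ndarray) -> np.ndarray:
    """Error-diffusion dither: desired ink coverage (0.0-1.0) in, 0/1 dots out.

    Tone is reproduced purely by how densely the fixed-size dots cluster,
    which is the same idea behind the stochastic/FM screens inkjets use.
    """
    img = coverage.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = int(new)
            err = old - new
            # Push the quantization error onto neighbors not yet visited.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

# A smooth left-to-right gradient: the result places dots more densely where
# the requested coverage is higher -- no varying dot sizes anywhere.
gradient = np.tile(np.linspace(0.0, 1.0, 64), (16, 1))
dots = floyd_steinberg(gradient)
print(dots.mean())   # roughly 0.5: about half the area ends up covered by dots
```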
When it comes to your discussion of printing line art TIFFs on a 2400 DPI CTP printer... I think the argument is highly technical and that both sides are, perhaps, correct in their own way. I can't really speak for Redrobes, but I think we're saying something very similar: every output device has its own built-in internal raster. The CTP printer you mention has a 2400 DPI raster. Every file sent to it, whether the original is raster or vector, gets converted to the machine's internal raster at the time of output. How could it be any other way? Since we're talking about line art, we're only discussing black and white pixels. In that case, as long as the file's resolution is less than or equal to the printer's DPI, there should be no loss of quality. No "resampling" will be evident in the printed image; it will appear exactly the same, pixel for pixel, as the file that was sent. I think maybe this is what you mean by "A line art tiff image is not rasterized but printed in full resolution in an offset printer"--there's no evidence of upsampling, downsampling, or any other image degradation so long as the output device has super-high resolution.
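Here's a minimal sketch of what "no resampling" means at the output stage, assuming the device DPI is an integer multiple of the file's PPI (600 PPI line art going to a hypothetical 2400 DPI raster, so each file pixel becomes an exact 4x4 block of device dots and nothing gets interpolated or thrown away):

```python
import numpy as np

file_ppi, device_dpi = 600, 2400
scale = device_dpi // file_ppi          # 4 device dots per file pixel

# A tiny 1-bit "line art" tile: 1 = black, 0 = white.
tile = np.array([[1, 0, 0, 1],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [1, 0, 0, 1]], dtype=np.uint8)

# Map each file pixel onto a solid block of device dots.
device_raster = np.kron(tile, np.ones((scale, scale), dtype=np.uint8))
print(device_raster.shape)              # (16, 16)

# Collapsing the device raster back down recovers the original exactly,
# which is the "pixel-for-pixel" identity described above.
recovered = device_raster[::scale, ::scale]
print(np.array_equal(recovered, tile))  # True
```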
"A good example of a thing not rasterized is text and vector graphics." -- I think there's some confusion here with what is being meant by both sides. Redrobes (I assume) and I are not saying that the text and vectors are being rasterized at the computer before being printed. Not in the same way you might pick a text layer in PS and choose the "rasterize type" command, no. (This can happen, of course, if the designer has a need to do things to the text in PS) Most of the time with fonts and vectors, whether they be in an Illustrator file, a PDF, a Word doc or whatever--most of the time, this info is sent to the printer as math. Often in the PostScript language. But, before the printer can print it out, the printer itself *must* rasterize the vector data to its own internal, machine raster defined by its hardware. And this is often so finely detailed today that you'd never be able to see the individual dots or pixels, even with a magnifying glass (if we're talking about line art, still). The only way this is not true is when talking about true vector printers, such as architectural plotters that use pens and roll the paper back and forth.