Video and audio basic knowledge collection

1. NTSC color television system: a color television broadcasting standard drawn up by the National Television Standards Committee of the United States in 1952. It uses quadrature balanced amplitude modulation, so it is also called the quadrature balanced amplitude modulation system. Most countries in the Western Hemisphere, such as the United States and Canada, as well as Taiwan (China), Japan, South Korea, and the Philippines, use this system.
2. PAL system: a color television broadcasting standard specified by West Germany in 1962. It uses line-alternating (phase alternating line) quadrature balanced amplitude modulation, which overcomes the color distortion caused by the NTSC system's phase sensitivity. Western European countries such as West Germany and Britain, as well as Singapore, mainland China, Hong Kong, Australia, and New Zealand, use this system. Depending on parameter details, PAL is further divided into G, I, D, and other variants; PAL-D is the variant adopted in mainland China.
3. SECAM system: SECAM is a French abbreviation meaning roughly "sequential color with memory", a system that transmits the color signals sequentially and stores them for restoration. It is a color TV system proposed by France in 1956 and finalized in 1966. It also overcomes the phase-distortion shortcoming of NTSC, but it transmits the two color-difference signals sequentially in time. Countries using the SECAM system are mainly concentrated in France, Eastern Europe, and the Middle East.
In order to receive and process TV signals of different systems, TV receivers and video recorders of different systems have been developed.
1. High-frequency or radio-frequency signals: In order to propagate TV signals through the air, the full composite TV signal must be modulated onto a high-frequency or radio-frequency (RF) carrier, with each signal occupying one channel, so that multiple channels of TV programs can be broadcast simultaneously without interfering with each other. China adopts the PAL system, in which each channel occupies a bandwidth of 8 MHz. The United States uses the NTSC system, with TV channels 2 through 69, each channel occupying a bandwidth of 6 MHz, and the TV band spanning 54 MHz to 806 MHz. CATV (Cable Television) works in a similar way, except that it transmits TV signals through cables rather than through the air.
After receiving a high-frequency signal from a certain channel, the TV must demodulate the full TV signal from the high-frequency signal to reproduce the video image on the screen.
2. Composite video signal: Composite video is defined as a single-channel analog signal containing both luminance and chrominance; that is, the video signal left after the sound has been separated from the full TV signal, with the chrominance signal still interleaved into the high end of the luminance spectrum. Because luminance and chrominance are interleaved in composite video, it is difficult to reproduce fully accurate colors on playback. This kind of signal can generally be fed into or out of a home video recorder through a cable, and its bandwidth is relatively narrow, giving a horizontal resolution of only about 240 lines. Earlier TV sets had only an antenna input, while newer sets are equipped with composite video input and output terminals (Video In, Video Out), so they can directly accept and output demodulated video signals. Since the video signal contains no high-frequency components, it is relatively simple to process, so computer video capture cards generally obtain video through a video input terminal. Because the audio signal is no longer carried inside the video signal, audio input and output ports (Audio In, Audio Out) are usually paired with the video ports so that sound can be transmitted in sync. For this reason the composite video interface is sometimes also called the AV (Audio/Video) port.
3. S-Video signal: Some TVs are also equipped with a two-component video input port (S-Video In). S-Video is a two-component video signal that separates the luminance and chrominance signals into two independent channels; the analog signals are carried on two separate wires and can be recorded on two tracks of an analog tape. This kind of signal not only provides wide bandwidth for both luminance and chrominance, but, because the two are transmitted separately, their mutual interference is reduced and the horizontal resolution can reach about 420 lines. Compared with composite video, S-Video reproduces colors noticeably better.
Two-component video can come from high-end cameras, which use two-component (Y/C) video to record and transmit the video signal. High-end video recorders and LaserDisc (LD) players can also output component-format video, with resolution much higher than TV programs obtained from home video recorders.
TVs of a given standard can only receive and process TV signals of that standard. Of course, multi-system and full-system TVs have also been developed, which makes it much easier to handle and convert TV signals between systems. Full-system TVs can be used in all countries and regions, while multi-system TVs are generally produced for a specified range of countries. For example, the Panasonic TC-2188M multi-standard TV supports the PAL-D, PAL-I, and NTSC (3.58) systems, so it can be used in mainland China (PAL-D), Hong Kong (PAL-I), and Japan (NTSC 3.58).
SMPTE time code: A video sequence usually uses a time code to identify and record each frame in the video data stream; from the start frame to the end frame of a video, every frame in between has a unique time code address. According to the standard of the Society of Motion Picture and Television Engineers (SMPTE), the format is hours:minutes:seconds:frames. A video clip with a length of 00:02:31:15 thus runs for 2 minutes, 31 seconds, and 15 frames; played back at 30 frames per second, its playback time is 2 minutes 31.5 seconds.
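As a rough illustration (my own sketch, not part of the original text), the arithmetic behind a non-drop-frame time code can be written in a few lines of Python; the function name timecode_to_seconds and the 30 fps default are assumptions of this example:

# Convert a non-drop-frame SMPTE time code "HH:MM:SS:FF" to seconds.
# Illustrative sketch only; assumes a constant frame rate (30 fps by default).
def timecode_to_seconds(timecode, fps=30):
    hours, minutes, seconds, frames = (int(x) for x in timecode.split(":"))
    return hours * 3600 + minutes * 60 + seconds + frames / fps

print(timecode_to_seconds("00:02:31:15"))  # 151.5 seconds, i.e. 2 minutes 31.5 seconds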
Film, video, and television each use different frame rates, and each has its corresponding SMPTE standard. For technical reasons, the actual frame rate of the NTSC system is 29.97 fps rather than 30 fps, so there is a 0.1% discrepancy between the time code and the actual playback time. To resolve this discrepancy, a drop-frame format was designed: roughly 2 frame numbers per minute are skipped during counting (the frames themselves are still displayed, nothing is deleted from the file), so that the time code stays consistent with the actual playing time. The counterpart of the drop-frame format is the non-drop-frame format, which simply ignores the difference between the time code and the actual playback time.
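A quick back-of-the-envelope check (again my own sketch; the "skip two frame numbers at the start of every minute except minutes divisible by 10" rule is the usual SMPTE drop-frame convention) shows why roughly two numbers per minute need to go:

# Why drop-frame time code skips about 2 frame numbers per minute (sketch, not authoritative).
nominal = 30 * 3600              # frame numbers counted per hour at exactly 30 fps: 108000
actual = round(29.97 * 3600)     # frames actually shown per hour by NTSC: about 107892
print(nominal - actual)          # 108 surplus frame numbers per hour

# Skipping numbers 00 and 01 at the start of every minute except minutes divisible by 10
# removes 2 * (60 - 6) = 108 numbers per hour, so the bookkeeping comes out almost exactly even.
print(2 * (60 - 6))              # 108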
Basic concepts of video compression coding: The goal of video compression is to reduce the video data rate while preserving visual quality as much as possible. The video compression ratio generally refers to the ratio of the amount of data after compression to the amount of data before compression. Since video is a sequence of still images, its compression algorithms have much in common with still-image compression algorithms, but moving video also has its own characteristics, so motion must be taken into account during compression in order to reach high compression. The following basic concepts are often used in video compression:
1. Lossy and lossless compression: The concepts of lossy and lossless compression in video are basically the same as for still images. Lossless compression means the data after decompression is exactly the same as the data before compression; most lossless compression uses run-length encoding (RLE). Lossy compression means the decompressed data differs from the original: during compression, some image or audio information to which human eyes and ears are insensitive is discarded, and the lost information cannot be recovered. Almost all high-compression algorithms are lossy, since that is the only way to reach low data rates. The amount of data lost is related to the compression ratio: the smaller the ratio, the more data is lost and the worse the result after decompression. In addition, some lossy algorithms compress repeatedly in multiple passes, which causes additional loss.
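To make the RLE idea concrete, here is a minimal run-length encoding sketch (my own illustration, not a production codec; the function names are invented for this example):

# Minimal run-length encoding: store (value, run length) pairs instead of raw samples.
def rle_encode(data):
    encoded = []
    for value in data:
        if encoded and encoded[-1][0] == value:
            encoded[-1][1] += 1          # extend the current run
        else:
            encoded.append([value, 1])   # start a new run
    return encoded

def rle_decode(encoded):
    return [value for value, count in encoded for _ in range(count)]

row = [255, 255, 255, 255, 0, 0, 7]
packed = rle_encode(row)
print(packed)                      # [[255, 4], [0, 2], [7, 1]]
print(rle_decode(packed) == row)   # True: a lossless round trip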
2. Intra-frame and inter-frame compression: Intra-frame compression is also called spatial compression. When compressing a single frame, only the data of that frame is considered, without exploiting redundancy between adjacent frames, which makes it essentially the same as still-image compression. Intra-frame compression generally uses lossy algorithms. Because the frames remain independent of one another, intra-frame-compressed video can still be edited frame by frame. Intra-frame compression generally does not achieve very high compression ratios.
Inter-frame compression is based on the fact that in most video and animation two consecutive frames are highly correlated, or in other words the information changes very little from one frame to the next; continuous video therefore contains redundancy between adjacent frames. Compressing away this redundancy further increases the amount of compression and lowers the compression ratio. Inter-frame compression, also known as temporal compression, works by comparing data between different frames along the time axis, and is generally lossless. The frame differencing algorithm is a typical temporal compression method: it compares the current frame with an adjacent frame and records only the differences between them, which can greatly reduce the amount of data.
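The frame-differencing idea can be sketched as follows (an illustrative toy of my own, assuming frames are simple lists of pixel values; real codecs add motion compensation on top of this):

# Toy frame differencing: keep the first frame, then store only per-pixel differences.
def diff_frames(previous, current):
    return [c - p for p, c in zip(previous, current)]

def rebuild_frame(previous, delta):
    return [p + d for p, d in zip(previous, delta)]

frame1 = [10, 10, 10, 200, 200]
frame2 = [10, 10, 12, 200, 200]                  # only one pixel changed
delta = diff_frames(frame1, frame2)
print(delta)                                     # [0, 0, 2, 0, 0]: mostly zeros, compresses well
print(rebuild_frame(frame1, delta) == frame2)    # True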
3. Symmetric and asymmetric coding: Symmetry is an important characteristic of compression coding. Symmetric coding means that compression and decompression require roughly the same processing power and time, which makes symmetric algorithms suitable for real-time compression and transmission of video; for video conferencing, for example, a symmetric algorithm is preferable. In electronic publishing and other multimedia applications, the video is generally compressed in advance and played back later, so asymmetric coding can be used. Asymmetric coding means that compression takes a great deal of processing power and time while decompression can run in real time; in other words, compression and decompression run at very different speeds. Generally, compressing a video takes far longer than playing it back (decompressing it). For example, compressing a three-minute video clip may take more than ten minutes, while playing the clip back in real time takes only three minutes.
MPEG (Moving Picture Experts Group) is an expert group established in 1988. In 1991 this group produced the MPEG-1 international standard, whose formal title is "Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s". Digital storage media here means general digital storage devices such as CD-ROMs, hard disks, and rewritable optical discs. MPEG can achieve compression of up to roughly 1:200; its goal was to compress a broadcast video signal so that it could be recorded on a CD and played back from a single-speed CD-ROM drive while retaining VHS-level picture quality and high-fidelity stereo sound. The coding algorithm adopted by the standard is called the MPEG algorithm for short, data compressed with this algorithm is called MPEG data, and the files generated from this data are called MPEG files, which use MPG as the file suffix.
3.1 MPEG digital video format
MPEG uses lossy and asymmetric compression encoding algorithms. The MPEG standard specifies the compression and decompression methods of video images, and the synchronization of images and sound required to play MPEG data. The MPEG standard includes three parts: MPEG video (Video), MPEG audio (Audio) and MPEG system (System).
1. MPEG video: MPEG video is the core of the standard. MPEG-1 was formulated to make video accessible on digital storage media such as CD-ROM. The data transfer rate of a CD-ROM drive is at least 150 KB/s (about 1.2 Mb/s, single speed), and the capacity is at least 650 MB; the MPEG-1 algorithm was developed for this rate. The window size of MPEG-1 is half the horizontal and vertical resolution defined by CCIR 601, at a frame rate of 30 fps or 25 fps. It combines several compression techniques, and the compressed data rate is about 1.2-3 Mb/s, so digital video stored on an optical disc can be played back in real time.
2. MPEG audio: The MPEG-1 standard supports highly compressed audio streams at sampling rates of 44 kHz, 22 kHz, and 11 kHz with 16-bit quantization, and the restored sound quality is close to the original, for example CD-DA quality. CD-DA-quality audio has a data rate of about 10 megabytes per minute (10 MB/min), equivalent to about 1.4 megabits per second (1.4 Mb/s), which is the entire bandwidth of a single-speed CD-ROM drive! Using the MPEG-1 audio compression algorithm, the mono bit rate can be reduced to 0.192 Mb/s or even lower with no significant loss of quality. MPEG-1 supports two channels, which can be configured as mono, dual-channel, stereo, and so on.
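The CD-DA figures quoted above can be verified with a little arithmetic (my own sketch; 44.1 kHz, 16-bit, two-channel are the standard CD-DA parameters):

# Uncompressed CD-DA audio rate: 44.1 kHz * 16 bits * 2 channels.
bits_per_second = 44100 * 16 * 2
print(bits_per_second / 1e6)            # about 1.41 Mb/s, roughly a single-speed CD-ROM's bandwidth
print(bits_per_second * 60 / 8 / 1e6)   # about 10.6 MB per minute of stereo audio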
The MP3 audio files now widely used on the Internet use MPEG-1 Audio Layer 3 (not "MPEG-3") to achieve compression ratios of 1:10 or even 1:12 with very little audible distortion.
3. MPEG system: This part deals with synchronization and multiplexing; it combines the digital video and audio into a single data bit stream at 1.5 Mb/s. The MPEG bit stream is divided into an outer layer and an inner layer: the outer layer is the system layer and the inner layer is the compression layer. The system layer provides the functions needed to use an MPEG bit stream in a system, including timing, multiplexing and demultiplexing of video and audio, and synchronization of picture and sound during playback. The compression layer contains the compressed video and audio bit streams.
Among the various video compression algorithms, MPEG offers the best combination of low data rate and high quality. MPEG-1 has been widely adopted, for example in the release of VCDs (Video CDs), and its playback quality reaches the level of a home video recorder. Different encoding parameters yield MPEG-1 data of different quality. The MPEG expert group also drew up the MPEG-2 standard in 1993, which is the standard adopted by DVD.
MPEG-1 data playback: Because MPEG uses an asymmetric compression algorithm, software MPEG compression on a PC is time-consuming; encoding even a few video clips can take hours, so a dedicated MPEG encoding card is generally used to perform the compression in hardware. To play back compressed MPEG data, it must first be decoded, and the large amount of decompressed digital video data is then sent to the display buffer for on-screen display. Two main factors therefore affect playback: the decoding rate and the display rate. Decoding is much faster than encoding, so either software decoding or hardware decoding can be used depending on the MPC hardware.
1. MPEG-1 software decoding: Software decoding means reading the MPEG-compressed data with a software algorithm, decompressing it, and sending the large amount of decompressed digital video data to the display buffer for on-screen display; MPEG decompression software is therefore also called MPEG playback software. Its advantage is that no extra hardware is required: MPEG digital video can be played on the MPC as it is, which is convenient. Its disadvantage is that both the decoding speed and the quality of the decoded video depend entirely on the processing power of the MPC.
If the processing and display speed of the MPC are not high enough, software playback of MPEG data may suffer from an insufficient frame rate, picture and sound falling out of sync, or "mosaic" artifacts (the image breaking into blocks). Therefore, under given hardware conditions, making the fullest possible use of the MPC's system resources is the key to better playback results.
2. MPEG-1 hardware decompression card: An MPEG hardware decompression card (decompression card for short) is a device dedicated to decompressing and playing back MPEG data; its core is a decompression chip. The advantage of hardware decompression is that the decompression and playback rate is not limited by the speed of the MPC host, full-screen real-time playback is possible, and stability and color are also better when playing VCDs. Its disadvantages are that it requires extra hardware and that installation and configuration are more troublesome. Hardware decompression cards are therefore generally used in MPCs whose processing speed is not high enough.
To use one, insert the decompression card into an expansion slot of the MPC host, connect its ports to the corresponding ports of the MPC, set the system parameters, and play MPEG-1 with the playback software that comes with the card.
Although MPEG-1 offers standardization, high compression, and good video quality, the MPEG files it generates require special decompression software or hardware for playback. The playback quality of decompression software depends on the processing power of the system, while decompression hardware requires extra equipment, which is inconvenient for users who want to use the video in software they develop themselves. In addition, to obtain high compression MPEG uses inter-frame compression; since each frame stores only its difference from the previous frame, frame-accurate editing is very difficult. MPEG files can only be played back after decompression by software or hardware and cannot be edited with most video editing software. For these reasons, AVI digital video is currently popular alongside MPEG digital video.
AVI digital video format
AVI (Audio Video Interleave) is a digital video file format for audio and video interleaved recording. In early 1992, Microsoft introduced the AVI technology and its application software VFW (Video for Windows). In AVI files, moving images and audio data are stored in an interleaved manner and are independent of hardware devices. This way of organizing audio and video data in an alternating manner makes it possible to obtain continuous information from the storage medium more efficiently when reading video data streams. The main parameters that constitute an AVI file include video parameters, sound parameters, and compression parameters:
1. Video parameters
1. Window size (video size): Depending on the application, the window size or resolution of an AVI file can be adjusted in a 4:3 ratio or arbitrarily: as large as full-screen 640 x 480, or as small as 160 x 120 or even lower. The larger the window, the greater the amount of data in the video file.
2. Frame rate (frames per second): The frame rate can also be adjusted and is proportional to the amount of data; different frame rates produce different degrees of motion smoothness (see the quick calculation below).
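To see why window size and frame rate drive the data amount, here is a rough uncompressed-rate calculation (my own sketch, assuming 24-bit color; the function name is invented for this example):

# Rough uncompressed video data rate for a given window size and frame rate (24-bit color assumed).
def raw_rate_mb_per_s(width, height, fps, bytes_per_pixel=3):
    return width * height * bytes_per_pixel * fps / 1e6

print(raw_rate_mb_per_s(640, 480, 25))   # about 23 MB/s for full-screen 640x480 at 25 fps
print(raw_rate_mb_per_s(160, 120, 15))   # about 0.86 MB/s for a small, low-frame-rate window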
2. Sound parameters: In an AVI file, the video and the sound are stored separately, so the picture from one video can be combined with the sound from another. AVI files are closely related to WAV files, because WAV files are the source of the sound signal in an AVI file; the basic sound parameters are the same as those of the WAV file format. In addition, an AVI file includes other audio-related parameters:
1. Interleave audio every X frames: In the AVI format the audio signal is stored interleaved every X frames; X is the adjustable frequency at which sound and video alternate. The minimum value of X is one frame, meaning that audio data is interleaved with every video frame, which is the default used for CD-ROM. The smaller the interleaving parameter, the less data has to be streamed into memory at once during playback and the easier continuous playback becomes. Conversely, if the storage platform holding the AVI file has a high data transfer rate, the interleaving parameter can be set larger: when the AVI file is stored on, and played back from, a hard disk, a larger interleaving interval can be used, such as several frames or even one second (a toy sketch after this subsection illustrates the resulting chunk order).
2. Synchronization control (Synchronization)
In AVI files, video and sound are synchronized very well. However, when playing back AVI files in MPC, there may be a phenomenon that the video and sound are not synchronized.
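As promised above, here is a toy model (my own illustration, not the actual RIFF/AVI chunk layout) of the order in which video and audio chunks alternate for a given interleaving interval X:

# Toy illustration of interleaving: one audio chunk after every X video frames.
def interleave_order(total_frames, x):
    stream = []
    for frame in range(1, total_frames + 1):
        stream.append(f"V{frame}")
        if frame % x == 0:
            stream.append(f"A({frame - x + 1}-{frame})")   # audio covering the last X frames
    return stream

print(interleave_order(6, 1))   # audio after every frame (the CD-ROM default)
print(interleave_order(6, 3))   # larger X: fewer, bigger audio chunks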
3. Compression parameters: The original analog video can be captured uncompressed to obtain the best image quality; after editing, appropriate compression parameters are chosen according to the target application environment.
1. Provides hardware-free video playback: Although the AVI format and the VFW software were designed for the MPCs of the time, they can be continuously improved to keep up with MPC development. Through the AVI format's parameters, the window size and frame rate can be adjusted to the hardware capability and processing speed of the playback environment. When playing on a low-end MPC or over the Internet, the VFW window, color depth, and frame rate can all be kept very low; on a Pentium-class system, 320 x 240 compressed video in 64K colors can reach a playback rate of 25 frames per second. In this way VFW runs on different hardware platforms, letting users edit and replay digital video on an ordinary MPC without expensive specialized hardware.
2. Realizes synchronization control and real-time playback: Through its synchronization parameters, AVI can adapt itself to the playback environment. If the MPC's processing power is not high enough and the data rate of the AVI file is relatively large, the player can drop some frames when playing the file under Windows, adjusting the actual playback data rate so that video and audio stay in sync. (MPC = Multimedia PC.)
3. Efficiently plays AVI files stored on hard disk or disc: Because AVI data is stored interleaved, VFW needs only a limited amount of memory when playing it back; the player can read the video data from the hard disk or disc while playing, without loading large amounts of video data into memory in advance. At any given moment it only needs access to a small amount of video and audio data. This not only improves system efficiency but also lets the playback program load and start quickly, reducing the user's waiting time.
4. Provides an open digital video file structure: The AVI file structure not only solves the problem of audio/video synchronization but is also general and open. It works in any Windows environment and can be extended. Users can create their own AVI video files, which can be called at any time in the Windows environment.

Part 2: Using the TV card. I had seen TV-card screenshots posted by others on the Internet, and the sample video captures I downloaded looked very good; I was quite impressed! So I followed the same prescription: bought the same TV card, installed the hardware and software, opened a TV program, and was dumbfounded. Why was the picture so bad? And there are other problems too, such as incomplete channel search, picture without sound or sound without picture, blurry images, severe interference stripes, remote control failure, even black screens, frozen pictures, and system restarts...

Ha ha, I am not trying to scare anyone; these are the problems with TV card use that show up on forums all the time. You would have to be very unlucky to hit all of them, but running into one or two of the above is common enough!

A TV card is not a household appliance; it is a PCI device with specific functions inside a computer system. It can only deliver its functions normally when the software and hardware environment it needs is in place, and how well each of its functions performs is closely tied to the overall performance of the computer and of the key related components (such as the graphics card). So it is not at all strange that the same TV card gives different results in different computers, and that is also what makes using a TV card (I like to say "playing" with a TV card) a challenge. One of the basic characteristics of science is repeatability under the same conditions. If the same TV card gives better results for someone else than it does for you, do not be discouraged; you should see hope in that. As long as you can create the same conditions, the same TV card has no reason not to give the same results. You may even manage to find a software and hardware environment that suits the card better, in which case it should not be surprising if the same TV card performs better for you than for others.

I have not seen anyone adjust the TV card hardware itself, and updating the driver or installing a new version of the TV application is not complicated. However, the internal conditions of TV card use (the system's software and hardware environment) and the external conditions (the quality of the input signal) are not that simple. That is what I want to explain: with a TV card, most of the work lies outside the card itself.

2. Signal quality

The job of a TV card is to take an analog video input (a TV RF signal, an S-Video signal, or an AV composite video signal), perform analog-to-digital conversion, video decoding, and processing, and then deliver the resulting digital video stream over the PCI bus to the graphics card for display, or to a designated compression engine for other processing (recording or capture) according to the system's instructions. All of this rests on the quality of the input signal: the result is determined first of all by the input. No video processing device can improve the signal quality of the source. Even a high-end TV set will not give a high-quality picture without a good enough input signal, and the same is true of a TV card. The idea that a good TV card should give a good picture regardless of the input signal quality is a conceptual mistake. In my view, what makes a good video conversion device is mainly the fidelity of its video and audio reproduction and its resistance to interference. For a TV card, it is of course the card's own fault if a good input signal does not produce a good result, but no TV card, however good, can turn a bad input signal into a high-quality picture. So to get the best possible result out of a TV card, the first thing to do is to get the best possible input signal quality.

So how do we get the best possible input signal quality? In principle it is simple: analyze the main factors that degrade signal transmission and you know where to start. In my view there are two main factors: signal attenuation along the transmission path, and interference from stray noise.

The cable TV signal delivered to the home should meet certain technical specifications. If the signal strength at the home is insufficient, only the cable provider's technicians can test and adjust it; the only part an individual can improve is the run from the cable TV wall outlet to the TV card's antenna connector. The natural attenuation of the signal during transmission depends on the conductor material (copper or aluminum), the cross-sectional area (thickness), and the distance: put simply, a copper core attenuates less than an aluminum one, a thicker cable attenuates less than a thinner one, and the longer the run, the greater the attenuation. Splitters also attenuate the signal: a one-to-two or one-to-three splitter typically introduces 3-6 dB of loss, and a one-to-four splitter more than 8 dB. I have first-hand experience of this. When my old house was renovated years ago it was rewired with three cable TV outlets (terminating on wall panels), with a one-to-three splitter on the incoming cable outside the door. The overall signal quality was not bad, but I always wanted a better TV signal for the TV card. Since the other two outlets were unused, I removed the outdoor splitter and spliced the TV card's cable directly onto the incoming line (copper cores twisted together, wrapped with insulating tape, and the shield braid joined with aluminum foil tape). The result was a gain of about 6 dB in signal strength, and the TV card's picture improved immediately! Encouraged by this, I then suspected that the wall outlet panel was not good enough either, removed its circuit board, and spliced directly again, so the cable TV signal now runs straight from outdoors to the TV card. The result really is good! I give this example to show that a difference of a few decibels in signal strength has a clearly visible effect on a TV card's picture. Wiring like mine is not very pretty; in general, choose coaxial cable with a reasonably thick copper core, keep the number of intermediate connections to a minimum, and do not use an unnecessarily long cable. All of these reduce attenuation along the way.
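For readers unfamiliar with decibels, the figures above can be put in perspective with a small conversion (my own sketch, assuming the dB values describe power ratios):

# Convert a decibel figure to a power ratio: ratio = 10 ** (dB / 10).
def db_to_power_ratio(db):
    return 10 ** (db / 10)

print(db_to_power_ratio(3))   # about 2.0: a 3 dB splitter loss roughly halves the signal power
print(db_to_power_ratio(6))   # about 4.0: regaining ~6 dB is roughly a fourfold power difference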

The other factor is interference. The TV card's resistance to interference seems to be worse than a TV set's, but I think the bigger issue is that the computer itself is a strong source of interference! A computer usually has many cables attached (power cables, data cables, network cables, and so on), a far more complicated environment than that of a TV set. My basic view is that the interference seen in a TV card's picture mostly comes from the various kinds of noise the computer induces on the signal cable connected to the card, which the cable then carries into the card. Using well-shielded signal cable and good connectors is therefore very effective at eliminating interference. I used coaxial cable with a copper core and a single-layer copper braid shield, said to be a 75-ohm specification, bought at a regular large hardware store for 3 yuan per meter, and it works well. Many TV card users use double-shielded cable TV coax and report good results too. Try to use better-quality connectors as well; sometimes this is more of a psychological comfort. I used so-called gold-plated connectors at 4 yuan each, and they do feel better. Whether that is real or psychological I cannot say, but at least it rules out the possibility that a poor connector is the weak link.

Inspired by other users, I also wrapped the last section of TV signal cable going into the TV card with aluminum foil tape; I think it strengthens the shielding in the heavily interfered area near the computer case. It seems to help, not dramatically, but with no adverse effect either. Some users use a TV signal cable with a ferrite ring, or add a ferrite ring near the connector, and say it eliminates interference; others have tried it and say it does nothing. I did not try it, because my interference problem was already solved; if it had not been, I certainly would have.

Now a word about TV signal amplifiers. The TV card seems to demand a stronger input signal than a TV set does. When I first started with the TV card, a very thin vertical interference line annoyed me. Some users reported that adding a TV signal amplifier removed this kind of interference, so I added the amplifier I had bought for my TV years earlier, and it worked! Since then I have tried four or five signal amplifiers and found that their quality varies enormously; frankly, I never found one I was satisfied with. During this time I saw screenshots and capture samples posted by users of the same TV card that were very good, better than mine, and they used no amplifier. In the process of repeatedly removing and refitting amplifiers I also discovered that the interference was sometimes due to a bad connector, and I came to realize that signal interference is mainly a matter of how well the TV cable is shielded. So I switched to all-metal plugs, dropped the amplifier, and cut down the number of intermediate connections, and, ha ha, the interference disappeared completely, with a better picture than with the amplifier, especially in color quality. Later, comparing screenshots from users who do use amplifiers, I kept feeling that amplifiers degrade color quality. So my opinion is: use a TV signal amplifier mainly when the signal strength is clearly insufficient, because clear viewing comes first; but if you are after the best color, avoid the amplifier as much as possible, and perhaps first ask the cable provider to fix the signal strength at the wall. Incidentally, an excessively strong TV signal also degrades the TV card's picture.

Although the TV card seems to demand higher signal quality than a TV set, comparing the results from my TV and my TV card at home, I think a TV signal that gives a good picture on a TV is good enough for a TV card. It is just that the strong interference environment near the computer means the signal cables and connectors used for a TV card need to be of higher quality than those used for a TV set.
1. Selection of connecting cable

Most urban users are on a cable TV system, and a 75-ohm coaxial cable connects the TV card for watching TV. An unstable or grainy TV picture is often caused by a poor-quality connecting cable. All kinds of TV coaxial cable are on the market, and the quality of different brands and price points varies greatly; off-brand cable generally uses poor materials, and even if it can receive a TV picture, the picture will not be clear. It is therefore best to buy TV cable with oxygen-free copper (OFC) conductors from a regular electrical appliance store to obtain better signal quality. As for length, just meet the actual need; an over-long cable attenuates the signal. Back in 1999 the author spent 180 yuan on a HISAGO 1092 OFC COAXIAL CABLE, and its performance on a TV card is still first-class today.

2. Add a signal amplifier to enhance the signal

If the TV picture reception is still unclear after working hard on wiring, then you need to configure a signal amplifier. Because if the TV signal itself is weak, using high-quality wiring will not help. The price of the amplifier ranges from more than ten yuan to hundreds of yuan. In addition to amplifying the TV signal, the more expensive amplifier also provides more functions such as TV interface and noise elimination. An amplifier less than one hundred yuan used by the author at home has a good effect.

3. Reduce internal interference

When watching TV with a TV card, there are many parts in the computer case, and each part will interfere with the TV receiving function of the TV card during operation, especially the cooling fan that rotates at high speed for a long time. If the quality of the TV card itself is not good, it is easily affected by the internal interference of the computer and the image quality is degraded. So what we have to do is to minimize the interference near the TV card. First, avoid fans near the TV card, such as the high-speed cooling fan on the new-generation graphics card. If conditions permit, the TV card should be installed in the PCI slot farther away from the processor and graphics card. Fortunately, most of the newly launched TV cards have done a good job of dealing with the internal interference of the computer.


As a relatively new kind of hardware, a TV card can not only receive and record TV programs but also capture external video signals; it really is a very practical product. Used together with free software such as Dscaler and Radiator, the TV card's potential can be realized even more fully. For example, with Dscaler's line-doubling (deinterlacing) of the TV picture, you can not only get high-quality progressive images on a computer monitor with a fine dot pitch and high resolution, you can even feed the processed video to a large-screen TV with a VGA input and enjoy a larger and better picture. Do you know the price difference between Toshiba's and Sony's large-screen color TVs with and without progressive scan? Three or four thousand yuan! So why not go buy a TV card right away, download Dscaler and Radiator, and enjoy free progressive-scan, flicker-free (85-120 Hz refresh) computer TV, plus a radio and recorder!

For those who own a TV card: if you spent 500-600 yuan and all you can do with it is watch TV, you will never feel satisfied. Why? Because color TVs are so cheap now. For 700 yuan you can buy a very decent 21-inch color TV, and that in any case looks better than a computer. So we have to dig out the TV card's potential and make it worth more than it cost.
Recording immediately comes into view. Today's hard disks are huge. If I have no time to watch a program, can I record it and watch it later? In the Pentium era that wish was out of reach, because the CPU simply could not cope, and a video capture card with hardware compression was far too expensive. Nowadays, with anything from a Celeron 600 up, you can do real-time recording and compression entirely in software. Of course, you need a bit more memory: under Windows 2000, preferably 256 MB or more. The chart below shows the CPU usage required at different frame rates by CPUs of various grades. We know that only at 25 frames per second or above does the eye stop perceiving flicker.
Before getting into the specific operations, let us first look at the various video formats:
1. AVI: AVI stands for Audio Video Interleave; it hardly needs explaining. This old video format, which Microsoft released back in the Windows 3.1 era, has served us for many years. Its advantages are good compatibility, easy use, and good image quality; its drawback is equally well known: huge file sizes! That is exactly why we have seen the progression from MPEG-1 all the way to today's MPEG-4. Some TV cards now advertise a hardware VCR (recording) function, but in fact it just captures to AVI format and has little practical value.
2. MPEG: MPEG stands for Motion Picture Experts Group, and it covers MPEG-1, MPEG-2, and MPEG-4 (note that there is no MPEG-3; the familiar MP3 is just MPEG Audio Layer 3). MPEG-1 is the one most people have encountered, because it is widely used for making VCDs and for video clips downloaded over the Internet; you could say 99% of VCDs are compressed in MPEG-1. MPEG-2 is used for DVD production (compression), and it is also used in HDTV (high-definition television broadcasting) and in demanding video editing and processing. With MPEG-2 compression, a 120-minute movie (as a raw video file) can be compressed to 4 to 8 GB (and of course MPEG-1 cannot match its image quality and other performance). MPEG-4 is a newer compression algorithm; the ASF format, which uses it, can compress a 120-minute movie down to a video stream of about 300 MB that can be watched over the Internet. The DivX format can also compress it to about 600 MB, with image quality much better than ASF.
3. DivX: The DivX video encoding technology was derived by modifying Microsoft MPEG-4 v3 and uses the MPEG-4 compression algorithm; it can also be said to have been developed to break free of the various restrictions around ASF. With this encoding technology, which was reportedly banned from export by the United States, MPEG-4 can fit a whole DVD onto just two CD-ROMs! That means you can get roughly comparable video quality without buying a DVD-ROM drive; all you need is a CD-ROM drive. And playback is undemanding: a CPU of 300 MHz or more, 64 MB of RAM, and a graphics card with 8 MB of video memory are enough for smooth playback.
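To relate those file sizes to bit rates, a quick calculation (my own sketch, using the figures quoted above for a 120-minute movie and treating 1 GB as 10^9 bytes) gives:

# Average bit rate implied by a file size for a 120-minute (7200-second) movie.
def avg_mbps(size_gb, seconds=120 * 60):
    return size_gb * 8 * 1000 / seconds   # 1 GB taken as 10^9 bytes for a rough figure

print(avg_mbps(8))      # about 8.9 Mb/s (upper end of the MPEG-2 DVD figure above)
print(avg_mbps(4))      # about 4.4 Mb/s (lower end)
print(avg_mbps(0.6))    # about 0.67 Mb/s (the ~600 MB DivX figure)
print(avg_mbps(0.3))    # about 0.33 Mb/s (the ~300 MB ASF figure)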

Factors that affect a TV card's video quality (in no particular order):
1. The chipset;
2. The driver;
3. The tuner (high-frequency head);
4. Signal quality;
5. The application software;
6. The machine configuration (especially the graphics card);
7. The operating system (including DirectX);
8. The circuit design and the materials used.
No single aspect can be judged good or bad in isolation. Everyone has seen the results achievable with DSCALER or FLY2000, which shows that the same hardware can perform very differently under different conditions.
That is why TV cards are worth tinkering with; they are not as simple as sound cards.

>>> From a rational point of view:
1. Hardware platform:
The mainstream chips are the 878, the 713X, and the 2388X, plus a few others.
The hardware design principle of each product is basically the same, so differences in hardware quality come down to the core chips.
The core here includes the decoder chip and the tuner; what remains is the quality of the supporting components and the PCB layout.
In this respect some well-known brands do better; what is really being tested is the manufacturer's purchasing power and production management.

2. The driver:
The driver affects decoding, transmission, and display. Very few domestic manufacturers are capable of writing their own drivers;
the off-brand vendors basically copy a driver and use it as-is. So when evaluating a card, whether the vendor can write its own driver should be taken into account.
You will often find that different products share exactly the same defect, and when it gets fixed, it gets fixed for everyone,
because a capable vendor can release driver updates to overcome temporary bugs.

3. The application software:
A few more domestic manufacturers can write application software than can write drivers,
but the core parts, such as display optimization and the compression engine, all rely on SDK packages from well-known foreign companies, and these are the limiting factors.
What the vendors add is a different face on top, which of course still matters: the ease of use and stability of the software.
So this development capability also has to be considered: a capable vendor can keep updating the application to improve quality,
whereas with a generic program you can only use the version bundled on the disc, with no legal way to get a better version for free.
Graphics card compatibility problems, when they appear, are often directly related to the TV card's driver and its display-optimization code.

>>> From a perceptual point of view:
That is, the things we users can actually experience.
This includes: how easy the hardware and software (including the driver) are to install;
how friendly the user interface is (both plain and fancy interfaces need to be considered), with recording and remote control as must-haves.
Most important of all, of course, is the picture quality the user actually sees, and the enjoyment it brings.
Apart from the enthusiasts here, very few users tinker with drivers and applications themselves,
so the driver and software bundled on the disc should be the baseline for comparison.
A longer warranty, being able to find answers on the manufacturer's website or get phone support when problems arise, or even on-site service,
and being able to download new drivers and applications, and so on:
these considerations filter out a lot of off-brand products.
It should also be pointed out that even ordinary TV cards differ in maturity.
For relatively mature products (judged by how long they have been on the market), users can simply pick a cheaper one.
For newly released products, users must choose a capable brand, because new products will always have problems of one kind or another.
