14.07.2020

The main factors affecting the performance of a PC. What determines the speed of a computer? Optimizing visual effects


Storage systems play a key role in the vast majority of web projects (and not only web projects). Often the task is not just to store a certain type of content, but also to ensure its delivery to visitors, as well as its processing, which imposes certain performance requirements.

Although many other metrics are used in drive manufacturing to describe and guarantee performance, in the storage and disk market it is common to use IOPS as the comparative metric, for the sake of "convenient" comparison. However, storage performance measured in IOPS (Input/Output Operations Per Second) depends on many factors.

In this article, I would like to look at these factors to make the IOPS measure of performance more understandable.

Let's start with the fact that one IOPS figure is not the same as another: many variables determine how many IOPS we get in a given case. You should also take into account that storage systems perform both reads and writes and provide different IOPS for each, depending on the architecture and the type of application, especially when I/O operations happen simultaneously. Different workloads place different requirements on input/output (I/O) operations. Thus, a storage system that at first glance should provide adequate performance may in reality fail to cope with the task.

Storage performance fundamentals

To gain a full understanding of the issue, let's start with the basics. IOPS, throughput (MB/s or MiB/s) and response time in milliseconds (ms) are the generally accepted units for measuring the performance of drives and arrays.

IOPS is usually considered as a measure of a storage device's ability to read/write 4-8 KB blocks in random order. This is typical of online transaction processing tasks, databases and running various applications.

The concept of drive throughput usually applies when reading/writing a large file sequentially, for example in blocks of 64 KB or more (1 stream, 1 file).

Response time is the time it takes a drive to begin performing a read/write operation.

The conversion between IOPS and throughput can be done like this:

IOPS = throughput / block size;
Throughput = IOPS * block size,

where block size is the amount of data transferred in one input/output (I/O) operation. Thus, knowing such a characteristic of a hard drive (SATA HDD) as its throughput, we can easily calculate the number of IOPS.

For example, take the standard block size of 4 KB and the manufacturer's declared sequential read or write throughput of 121 MB/s. IOPS = 121 MB / 4 KB, which gives us a value of about 30,000 IOPS for our SATA hard drive. If the block size is increased to 8 KB, the value will be about 15,000 IOPS; that is, it decreases almost in proportion to the increase in block size. However, you need to clearly understand that here we looked at IOPS for sequential reads or writes.
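As a quick sanity check, this conversion is easy to script. Below is a minimal Python sketch; the 121 MB/s figure and the block sizes are the article's own example values:

```python
# IOPS <-> throughput conversion for sequential access.

def iops_from_throughput(throughput_bytes_per_s: float, block_size_bytes: int) -> float:
    """IOPS = throughput / block size."""
    return throughput_bytes_per_s / block_size_bytes

MB = 1000 * 1000  # vendors usually quote decimal megabytes
KB = 1024

throughput = 121 * MB  # declared sequential throughput of the SATA HDD
for block in (4 * KB, 8 * KB):
    print(f"{block // KB} KB blocks: ~{iops_from_throughput(throughput, block):,.0f} IOPS")
# -> ~29,541 IOPS at 4 KB and ~14,771 at 8 KB,
#    i.e. the text's "about 30,000" and "about 15,000"
```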

Things change dramatically for traditional SATA hard drives when reads and writes are random. Here latency begins to play a role, which is very critical for SATA/SAS HDDs (Hard Disk Drives), and sometimes even for SSDs (Solid State Drives). Although the latter often provide orders of magnitude better performance than "spinning" drives thanks to the absence of moving parts, they can still experience significant write delays due to peculiarities of the technology, and consequently when used in arrays. Dear amarao has done some useful research on the use of solid-state drives in arrays; as it turned out, performance depends on the latency of the slowest drive. For details, see his article: SSD + raid0 - not everything is so simple.

But back to the performance of individual drives. Consider the case of "spinning" drives. The time required to complete one random I/O operation is determined by the following components:

T(I/O) = T(A) + T(L) + T(R/W),

where T(A) is the access time, also known as the seek time: the time it takes for the read head to be positioned over the track containing the block of information we need. The manufacturer often specifies three parameters in the disk specification:

- the time required to move from the farthest track to the nearest one;
- the time required to move between adjacent tracks;
- the average access time.

Thus, we come to the magic conclusion that T(A) can be improved if we place our data on tracks as close to each other as possible and as far as possible from the center of the platter (moving the head assembly takes less time, and the outer tracks hold more data, since a track there is longer and passes under the head faster than an inner one). Now it becomes clear why defragmentation can be so useful, especially if the data is placed on the outer tracks first.

T(L) is the delay caused by platter rotation: the time it takes to reach the specific sector on our track for reading or writing. It is easy to see that it ranges from 0 to 1/RPS, where RPS is the number of revolutions per second. For example, for a drive rated at 7200 RPM (revolutions per minute) we get 7200/60 = 120 revolutions per second, so one revolution takes (1/120) * 1000 (milliseconds in a second) = 8.33 ms. The average delay in this case is half the time of one revolution: 8.33 / 2 = 4.16 ms.
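This half-revolution calculation is mechanical enough to script for the common spindle speeds:

```python
# Average rotational delay = time for half a revolution.
for rpm in (7200, 10_000, 15_000):
    revolution_ms = 1000 / (rpm / 60)  # one full revolution, in milliseconds
    print(f"{rpm} RPM: revolution {revolution_ms:.2f} ms, "
          f"average delay {revolution_ms / 2:.2f} ms")
# 7200 RPM -> 8.33 ms per revolution, ~4.17 ms average delay
#            (the article rounds this to 4.16 ms);
# 10,000 RPM -> ~3.00 ms average delay; 15,000 RPM -> 2.00 ms
```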

T(R/W) is the time to read or write a sector, determined by the block size chosen during formatting (from 512 bytes up to several megabytes; on more capacious drives, from 4 kilobytes, the standard cluster size) and by the throughput specified in the drive's specifications.

The average rotational delay, approximately equal to the time of half a revolution, is easy to determine knowing the rotation speed (7200, 10,000 or 15,000 RPM); we have already shown how above.

The remaining parameters (average read and write seek times) are harder to determine: they are measured in tests and specified by the manufacturer.

To calculate the number of random IOPS of a hard disk, assuming an equal share of simultaneous read and write operations (50%/50%), we can apply the following formula:

IOPS = 1 / ((((average read seek time + average write seek time) / 2) / 1000) + (average rotational delay / 1000)).

Many people wonder where this formula comes from. IOPS is the number of input/output operations per second; that is why we divide 1 second (1000 milliseconds) in the numerator by the time in the denominator (also expressed in seconds or milliseconds) required to perform one input or output operation, including all delays.

That is, the formula can be written as:

IOPS = 1000 (ms) / (((average read seek time (ms) + average write seek time (ms)) / 2) + average rotational delay (ms))

For drives with different rotation speeds (RPM), we get the following values:

For a 7200 RPM drive: IOPS = 1 / ((((8.5 + 9.5) / 2) / 1000) + (4.16 / 1000)) = 1 / ((9 / 1000) + (4.16 / 1000)) = 1000 / 13.16 = 75.98;
For a 10K RPM SAS drive: IOPS = 1 / ((((3.8 + 4.4) / 2) / 1000) + (2.98 / 1000)) = 1 / ((4.10 / 1000) + (2.98 / 1000)) = 1000 / 7.08 = 141.24;
For a 15K RPM SAS drive: IOPS = 1 / ((((3.48 + 3.9) / 2) / 1000) + (2.00 / 1000)) = 1 / ((3.69 / 1000) + (2 / 1000)) = 1000 / 5.69 = 175.75.
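These calculations can be wrapped in a small helper; the seek times below are the example figures from the text, and the result is the same few dozen IOPS:

```python
# Random IOPS estimate for a 50/50 read-write mix, per the formula above.

def random_iops(read_seek_ms: float, write_seek_ms: float,
                rotational_delay_ms: float) -> float:
    avg_seek_ms = (read_seek_ms + write_seek_ms) / 2
    return 1000 / (avg_seek_ms + rotational_delay_ms)  # 1000 ms / one-I/O latency

drives = {
    "7200 RPM SATA": (8.5, 9.5, 4.16),
    "10K RPM SAS": (3.8, 4.4, 2.98),
    "15K RPM SAS": (3.48, 3.9, 2.00),
}
for name, params in drives.items():
    print(f"{name}: ~{random_iops(*params):.0f} IOPS")
# -> ~76, ~141 and ~176 IOPS respectively
```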

Thus we see a dramatic change: from tens of thousands of IOPS for sequential reads or writes, performance drops to a few dozen IOPS.

And with a standard block size of 4 KB and such a small number of IOPS, the resulting throughput is not hundreds of megabytes per second, but less than one megabyte per second.

These examples also explain why nominal IOPS figures differ only slightly between manufacturers for drives with the same RPM.

Now it becomes clear why the performance data lie in fairly wide ranges:

7200 RPM (Rotations Per Minute) SATA HDD - 50-75 IOPS;
10K RPM HDD SAS - 110-140 IOPS;
15K RPM HDD SAS - 150-200 IOPS;
SSD (Solid State Drive) - tens of thousands of IOPS for reading, hundreds and thousands for writing.

However, nominal disk IOPS is still far from accurate, since it does not account for differences in the nature of the load in individual cases, which is very important to understand.

Also, for a better understanding of the topic, I recommend reading another useful article from amarao, How to measure disk performance correctly, which also makes clear that latency is not completely fixed and likewise depends on the load and its nature.

The only thing I would like to add:

When calculating hard disk performance, we can neglect the decrease in IOPS as the block size grows. Why?

We have already realized that for "spinning" drives, the time required for a random read or write is made up of the following components:

T(I/O) = T(A) + T(L) + T(R/W).

And we even calculated random read/write performance in IOPS from them. But we essentially neglected the T(R/W) parameter there, and that is no coincidence. We know that sequential reads can run at, say, 120 megabytes per second. It follows that a 4 KB block is read in about 0.03 ms, two orders of magnitude less than the other delays (8 ms + 4 ms).

Thus, if at a 4 KB block size we have 76 IOPS (the main delay coming from platter rotation and head positioning time, not from the read or write process itself), then at a 64 KB block size the drop will not be 16-fold, as with sequential reads, but only a few IOPS, since the time spent directly reading or writing grows by about 0.45 ms, which is only about 4% of the total latency.

As a result we get 76 - 4% ≈ 72.96 IOPS, which, you must agree, is not at all critical in the calculations, since the drop is not 16-fold but only a few percent! When calculating system performance, it is far more important not to forget the other significant parameters.
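To make this argument concrete, here is a sketch that adds the transfer time T(R/W) back into the per-operation latency; it assumes the ~120 MB/s media speed and the 13.16 ms seek-plus-rotation latency from the 7200 RPM example above:

```python
# Effect of block size on random HDD IOPS: transfer time is a small add-on.

MEDIA_MB_PER_S = 120           # assumed sequential media speed
SEEK_PLUS_ROTATION_MS = 13.16  # from the 7200 RPM example above

def random_iops(block_kb: float) -> float:
    transfer_ms = block_kb / 1024 / MEDIA_MB_PER_S * 1000  # time to move the block itself
    return 1000 / (SEEK_PLUS_ROTATION_MS + transfer_ms)

for kb in (4, 64, 128):
    print(f"{kb} KB blocks: ~{random_iops(kb):.1f} IOPS")
# 4 KB -> ~75.8, 64 KB -> ~73.1, 128 KB -> ~70.4:
# a drop of a few percent, not 16x as with sequential access
```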

Magic conclusion: when calculating the performance of storage systems based on hard disks, you should select the optimal block (cluster) size that provides the maximum throughput needed for your type of data and applications; the drop in IOPS when the block size grows from 4 KB to 64 KB or even 128 KB can be neglected, or accounted for as 4% and 7% respectively, if it plays an important role in the task at hand.

It also becomes clear why it does not always make sense to use very large blocks. For example, for video streaming a 2 MB block size may not be the best option, since the drop in IOPS will be more than twofold. Among other things, additional degradation processes appear in arrays, associated with multithreading and the computational load of distributing data across the array.

Optimal block (cluster) size

The optimal block size should be chosen based on the nature of the load and the type of application used. If you are working with small data, for example databases, choose the standard 4 KB; if we are talking about streaming video files, a cluster size of 64 KB or more is better.

It should be remembered that block size is not as critical for HDDs as it is for SSDs: an HDD can provide the required throughput with a small number of random IOPS, and that number decreases only slightly as the block size grows, in contrast to an SSD, where the dependence is almost proportional.

Why is 4 KB the standard?

For many drives, especially solid-state ones, performance figures (for writes, for example) become optimal starting from 4 KB, as can be seen from the graph:

For reads, too, the speed is quite substantial and more or less acceptable starting from 4 KB:

It is for this reason that a 4 KB block is very often used as the standard: with a smaller size there are large performance losses, while with a larger block, when working with small data, the data is distributed less efficiently, occupies the entire block, and the drive's capacity is not used effectively.

RAID level

If your storage system is a RAID array of a certain level, then performance will depend to a large extent on the RAID level applied and on what percentage of total operations are writes, because writing is what causes performance degradation in most cases.

So, with RAID0 only 1 IOPS is spent on each write operation, because the data is distributed across all drives without duplication. In the case of a mirror (RAID1, RAID10), each write operation already consumes 2 IOPS, since the information must be written to two drives.

At higher RAID levels the losses are even more significant: in RAID5 the penalty factor is already 4, which follows from the way data is distributed across the disks.

RAID5 is used instead of RAID4 in most cases because it distributes parity (checksums) across all disks. In a RAID4 array one dedicated disk holds all of the parity, while the data is spread over the remaining disks. This is why we apply a penalty factor of 4 on a RAID5 array: for each write, we read the data, read the parity, then write the data and write the parity.

In a RAID6 array everything is similar, except that instead of calculating parity once we do it twice; thus we have 3 reads and 3 writes, giving a penalty factor of 6.

It would seem that in an array like RAID-DP everything would be the same, since it is essentially a modified RAID6 array. But no. The trick is the separate WAFL (Write Anywhere File Layout) file system, in which all write operations are sequential and go to free space. WAFL basically writes new data to a new location on disk and then moves the pointers to the new data, thus eliminating the read operations that would otherwise have to take place. In addition, a log written to NVRAM tracks write transactions, initiates writes, and can replay them if necessary. Writes first go to a buffer and are then "flushed" to disk, which speeds up the process. The experts at NetApp can probably explain in the comments exactly where the savings come from; I have not yet fully figured this question out, but I remember that the RAID penalty factor here is only 2, not 6. The "trick" is quite substantial.

With large RAID-DP arrays consisting of dozens of disks, there is the concept of a decreasing "parity penalty": as a RAID-DP array grows, fewer disks are needed for parity relative to data, which lowers the losses associated with parity writes. However, in small arrays, or to be more conservative, we can neglect this phenomenon.

Now, knowing the IOPS losses from using a particular RAID level, we can calculate the performance of the array. Note, however, that other factors can also have a negative impact, such as interface bandwidth, non-optimal distribution of interrupts across processor cores, RAID-controller throughput, and exceeding the allowed queue depth.

If these factors are neglected, the formula will be as follows:

Functional IOPS = (Baseline IOPS * %Write / RAID Penalty) + (Baseline IOPS * %Read), where Baseline IOPS = average IOPS per drive * number of drives.

Let's calculate, for example, the performance of a RAID10 array of 12 SATA HDDs, given that 10% of operations are writes and 90% are reads occurring simultaneously, and that each disk provides 75 random IOPS at a 4 KB block size:

Baseline IOPS = 75 * 12 = 900;
Functional IOPS = (900 * 0.1 / 2) + (900 * 0.9) = 855.
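The same arithmetic, wrapped in a helper with the penalty factors discussed above (taking the factor of 2 for RAID-DP on the article's word):

```python
# Functional IOPS of an array, given a write share and a RAID write penalty.

RAID_PENALTY = {"RAID0": 1, "RAID1": 2, "RAID10": 2,
                "RAID5": 4, "RAID6": 6, "RAID-DP": 2}

def functional_iops(disk_iops: float, disks: int,
                    write_share: float, level: str) -> float:
    baseline = disk_iops * disks
    return baseline * write_share / RAID_PENALTY[level] + baseline * (1 - write_share)

# The article's example: 12 SATA HDDs at 75 random IOPS each, 10% writes, RAID10.
print(functional_iops(75, 12, 0.10, "RAID10"))  # -> 855.0
# For comparison, the same array as RAID6:
print(functional_iops(75, 12, 0.10, "RAID6"))   # -> 825.0
```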

Thus, we see that with a low write share, which is typical of systems designed for content delivery, the influence of the RAID penalty factor is minimal.

Application dependency

The performance of our solution can depend greatly on the applications that will run on it. Transaction processing deals with "structured" data that is organized, consistent and predictable; such processes can often be batch-scheduled, spreading them over time so that the load stays minimal and IOPS consumption is optimized. Recently, however, there are more and more media projects where the data is "unstructured" and requires entirely different processing principles.

For this reason, calculating the required performance of a solution for a specific project can be a daunting task. Some storage vendors and experts argue that IOPS do not matter, since customers overwhelmingly use up to 30-40 thousand IOPS while modern storage systems provide hundreds of thousands and even millions; that is, modern storage meets the needs of 99% of customers. Nevertheless, this statement is not always true: it holds for the business segment that hosts storage locally, but not for projects hosted in data centers, which often, even when using ready-made storage solutions, must provide both high performance and fault tolerance.

When a project is hosted in a data center, in most cases it is still more economical to build a storage system yourself from dedicated servers than to use ready-made solutions, since this makes it possible to distribute the load more efficiently and to select the optimal hardware for particular processes. Among other things, the performance figures of ready-made storage systems are far from real life, since most of them are based on synthetic benchmark profiles with a 4 or 8 KB block size, while most client applications now run in environments with block sizes from 32 to 64 KB.

As you can see from the graph:

Fewer than 5% of storage systems are configured with block sizes under 10 KB, and fewer than 15% use blocks under 20 KB. In addition, even for a single application it is rare to see only one type of I/O. For example, a database will have different I/O profiles for different processes (data files, logging, indexes, and so on). This means that the declared synthetic performance tests may be far from the truth.

What about delays?

Even if we ignore the fact that latency-measuring tools tend to report average latency and miss that a single I/O in some process can take much longer than the rest, slowing down the whole process, they still do not account for how much I/O latency changes with block size. Among other things, this time also depends on the specific application.

Thus, we come to another magic conclusion: not only is IOPS, taken without the block size, a poor characteristic of system performance, but latency can also turn out to be a completely useless parameter.

Okay, if neither IOPS nor latency is a good measure of storage performance, then what?

Only a real test of application execution on a specific solution ...

Such a test is the real method that lets you understand for sure how productive the solution will be in your case. To do this, run a copy of the application on separate storage and simulate the load for a certain period. Only this way can reliable data be obtained. And, of course, you need to measure not storage metrics but application metrics.

Nevertheless, taking into account the factors described above can be very useful when choosing storage or building an infrastructure from dedicated servers. With a certain degree of conservatism, it becomes possible to find a more or less realistic solution and to exclude certain technical and software flaws, such as a sub-optimal block size when partitioning or sub-optimal work with the disks. Of course, the solution will not 100% guarantee the calculated performance, but in 99% of cases it will be possible to say that it will cope with the load, especially if conservatism is added to the calculation according to the type of application and its features.

Against the background of the rapid development of digital technologies, modern computer technology is quickly becoming obsolete. In this article, we will look at possible ways to upgrade your PC to increase its performance.

Introduction

In the modern world, the rapid development of digital technologies means that a newly acquired computer, even the most powerful at the time of purchase, becomes hopelessly obsolete after a very short time. If the service life of ordinary household appliances is at least 10 years, for computers it is two to three times shorter.

When a computer is used for a narrow range of tasks, for example office applications and the Internet, the question of updating is not as acute as with universal use. If the computer serves as a gaming platform, for video and photo processing, listening to music and watching movies, then updating its components becomes desirable and necessary.

Determining when the time has come for such an upgrade (from English "upgrade": improvement, modernization) is very simple even without complex measurements: if a new game does not start or runs too slowly, and new programs take a long time to load, the computer needs an upgrade. Do not forget that the Internet becomes more interactive every year, gaining high-definition video, high-resolution photos, high-quality online games and other content that places high demands on a computer's resources.

Components affecting performance

Inside the computer's system unit, that same metal case usually located at the bottom of a computer desk or simply on the floor, there is a set of components from which, like children's building blocks, the required functionality is assembled.

The basis of the entire system is the motherboard, into which the central processor, RAM and video card are plugged. Replacing these components (all of them or selectively) can significantly improve overall system performance.

Another component that has some effect on system performance is the hard drive. Although its main function is to store information, its read and write speed strongly affects the launch of applications and of the operating system as a whole.

Other components of the system unit do not affect performance and are replaced for other reasons.

Identifying installed components

Before going to the store for new components, you need to decide in advance what you can count on and whether upgrading the computer is advisable at all. Moreover, to install a new processor or graphics card, or to increase the amount of RAM, you need to make sure that your motherboard supports the specific models of the new components. To sort all this out, you need to find out what parts currently sit in your machine and, based on that information, assess the prospects for a future upgrade.

There are several ways to determine your computer's configuration, but the easiest is to use special utilities, which are not difficult to find. In this material we will use the free program Piriform Speccy.

As you can see from the figure, after launch the utility shows general information about system resources and the installed operating system. It also makes it easy to obtain more detailed data about the installed processor and its characteristics, the RAM, the video card and other components of interest.

Now that we have a tool in our hands that allows us to determine all the necessary parameters of a computer, let's take a closer look at the main details that primarily affect the performance of a PC.

CPU

The central processor is a kind of computer brain and carries out all the mathematical calculations. Replacing this component with a faster model takes your PC to a whole new level and improves overall performance in all applications. For example, it will speed up video transcoding and editing, loading the operating system and many other programs, performing complex engineering calculations, and many other tasks.

In addition, new models, as a rule, have lower power consumption and, as a result, heat up less, which in turn makes it possible to install a quieter cooling system.

Physically, the central processing unit (CPU) is a large square microcircuit. It always carries a heatsink and a fan for heat dissipation, whose size depends directly on the chip's heat output, which grows with clock frequency.

Central processing units for personal computers are made by two companies: AMD and Intel. Intel products offer the latest technologies and higher performance, while AMD attracts buyers with a more favorable price/performance ratio.

Now is the time to use the Speccy utility to find out the manufacturer of the processor installed in your system and its main characteristics. All this information appears in the main window of the program next to the CPU item.

Here we are interested in four main parameters, on the basis of which we will draw conclusions about the possibility of upgrading this component:

  • Manufacturer. In our example, this is Intel.
  • Model and code name. In our case, this is a Core i7 of the Lynnfield family.
  • Socket design or type. Here it is Socket 1156.
  • Clock frequency. In our example, 2.8 GHz.

Now, what can we learn from this information? Firstly, processors from rival AMD will not suit us. Secondly, only Intel processors designed for Socket 1156 can be installed in this motherboard. Thus, we have quite seriously narrowed the search for a possible candidate for the future upgrade. By the way, you can always determine the manufacturer by the socket type.

Now it remains to assess the feasibility of this update. Digging through search engines, or just Wikipedia, you can find out that Intel has by now practically stopped producing processors for the Socket 1156 socket, which means this platform has no future and new-generation processors are not made for it. Moreover, our example already has a Core i7 processor installed, the most powerful family in the Intel lineup, and the top representative of the Lynnfield Core i7 line has a clock frequency of 3.07 GHz, only 270 MHz more than the existing chip.

After analyzing the information received, we can say with confidence that for our example, upgrading the processor without replacing the motherboard is impractical.

In general, for Intel processors an upgrade is worthwhile only if the board uses an LGA 775 socket or a newer one with a higher numerical index. Upgrading systems based on LGA 478 and older currently makes no sense. The most promising sockets at the moment are LGA 1155 and, for enthusiasts, LGA 2011.

Likewise, for AMD products an upgrade makes sense for AM2+ and later sockets. The most promising are AM3 and AM3+.

Video card

The next component that affects performance is the graphics card. Its main task is to form the image on the monitor. In most cases, modernization is needed only by fans of modern three-dimensional computer games. If a game runs slowly (lags), the main reason is usually insufficient video card performance.

Modern video cards are complex computing devices that duplicate many functions of the main computer: they have their own specialized graphics processor, a fan with a heatsink, and their own video memory chips. Physically, the graphics adapter is a fairly large board with electronic components and, if necessary, additional power connectors. It is inserted into a special connector on the computer's motherboard, and the monitor cable plugs into the video card.

In general, there are three types of connectors for video cards: the very old PCI, the outdated and practically discontinued AGP, and the modern PCI-Express x16 (PCI-E x16). Upgrading your computer's graphics subsystem only makes sense if its motherboard has the modern PCI-E connector. It is worth upgrading the video adapter to a board with a newer and more powerful graphics processor, but changing the video card just for the sake of more video memory is not worth it.

To determine the type of graphics connector in your system, let's again use the Speccy utility, selecting the Motherboard item on the left. Now find the PCI Data item in the right part of the program window.

In our example, the motherboard is equipped with two PCI-E x16 connectors at once. This makes it possible, given the desire and the finances, to install two video cards simultaneously in SLI (Nvidia) or CrossFire (AMD) mode, combining their computing power. If there is no such connector in your case, upgrading the video card without replacing the motherboard makes no sense.

Although there are many video cards from various manufacturers on the market (ASUS, Gigabyte, MSI, Sapphire, PowerColor and others), they are all in fact built around graphics processors produced by two American companies: AMD (ATI) and nVidia.

All video cards based on AMD chips are named Radeon HD XXXX, where XXXX is a four-digit number whose first digit indicates the generation the card belongs to: the higher it is, the more modern the card. The second digit indicates the adapter family; the higher it is, the more powerful and productive the graphics solution, but also the more expensive. The third digit indicates the adapter subfamily, and the same principle holds: the higher, the better. At the moment, the latest generation of AMD-based video cards is the Radeon HD 7xxx.

Video cards based on graphics solutions from nVidia are called GeForce GT/GTS/GTX XXX. As before, XXX is a number whose first digit denotes the generation and whose second denotes the family of the graphics accelerator; the higher these digits, the more modern and powerful the adapter. The fastest products carry the GTX prefix. The latest solutions based on nVidia GPUs have a GTX 5xx index, though new sixth-generation boards will become available to users very soon.

You need to know that all modern high-performance graphics solutions require additional power. This means your power supply must have the appropriate free connectors and sufficient power headroom. Therefore, before buying a new video card, find out whether it needs additional power and what minimum wattage your computer's power supply should have. All this information can be found on the Internet, or you can ask a sales assistant.

Unfortunately, within the scope of this material we cannot acquaint you in more detail with the video card model lines: the topic is too large. If you want to study this question yourself, refer to our article on the subject. Otherwise, take a knowledgeable acquaintance with you to the store or ask a sales consultant, who will suggest possible options.

To close this topic, let's assess, using our example, the feasibility of upgrading the video card.

As you can see from the figure, our graphics adapter is called GeForce GTX 580. This means the test computer is equipped with a graphics solution based on nVidia logic that belongs to the most recent generation. Moreover, the digit 8 in the index indicates that this is the most powerful single-chip product of this company. At the moment, such a video card has no performance problems in any modern game and does not need to be upgraded.

RAM

The third key component is RAM. In most cases, the more memory, the better. At the moment, the minimum amount for comfortable work with programs is 2 gigabytes (GB). Computers with 1 GB or even 512 MB gain significant performance from adding RAM, since its amount affects the loading speed and subsequent operation of programs and the operating system.

Physically, RAM is a narrow rectangular board (a memory module) with microcircuits soldered onto it, inserted into a special connector on the motherboard. The capacity depends on the number of soldered chips per module, so modules with the same appearance can have different capacities. Inexpensive motherboards accept only two memory modules, more advanced ones 4 or even 8.

Before increasing the amount of memory, you first need to determine its type, which can be SDRAM, DDR, DDR2 or DDR3. The first two varieties are already obsolete and out of production; DDR2 is still widespread but is actively being displaced by the newer DDR3 standard. At the same time, the old memory standard costs almost twice as much as the modern one. Externally, the connectors for different memory types differ in the number and shape of contacts, so if your computer uses DDR2, another type will not fit.

Then you need to find out how many memory slots your motherboard has and how many of them are free. If there are only two connectors and both are occupied, you will have to replace the old modules with new, larger ones. If there are 4 slots and two of them are free, you can simply add new modules to the existing ones.

It should be remembered that RAM modules should be installed in pairs to enable dual-channel mode, which significantly increases memory speed and bandwidth.

Now let's again use the familiar utility to get all the necessary information about the RAM installed in the system. To do this, just click the RAM tab, and all the necessary parameters will appear before you.

From our example, you can see that the test system has 8 GB of DDR3 RAM. The system board has 4 slots for memory modules, two of which are still free, which makes it possible to add more modules at any time.

HDD

The final component that can increase system performance is the hard drive. Replacing it with a faster drive mostly affects the speed of loading the operating system and launching applications; on performance inside the programs themselves, the hard drive has practically no effect.

Modern storage devices come in two types: magnetic (HDD) and solid-state (SSD). The former are the most common, capacious and affordable. The latter are many times faster, but hold less data and cost many times more.

Hard drives can also have different interfaces for connecting to the motherboard. There are only two: parallel (IDE, ATA, Ultra ATA) and serial (SATA, SATA II or SATA III). The serial interface is the modern standard for connecting drives. If your computer has no such connectors, upgrading for performance makes no sense, since developers no longer release new solutions with the IDE interface.

Undoubtedly, the most tangible difference when upgrading storage comes from installing a solid-state drive. But, as we said earlier, this is not a cheap pleasure. On the other hand, given the still-high cost of SSDs, you can buy a small one for the operating system and essential applications, and keep all other data on a classic magnetic drive, where the cost of storing a megabyte is much lower.

Unfortunately, the Speccy utility does not show the number and kinds of drive interfaces present on the motherboard. Still, some useful information can be extracted from it. First, the Hard Drives section shows which devices are already installed in your system. And even if no drives with a SATA interface are listed, that does not mean such connectors are absent from your motherboard.

If your computer is no more than five or six years old, they are most likely present. To establish their presence or absence for certain, it is enough to open the Windows Device Manager. To do this, right-click the My Computer icon and select Properties, then in the window that opens select Device Manager. In the list of devices, almost at the top, find the line IDE ATA/ATAPI controllers; opening it shows which controllers are present on the motherboard.

As you can see from the figure, in our case the motherboard has 6 SATA ports and one dual-channel IDE connector capable of accepting two devices at once.

Laptop upgrade

Unlike a classic desktop computer (system unit), a laptop's upgrade options are very limited. In most cases, users can only increase the amount of RAM and replace the hard drive. Moreover, there is usually only one slot for additional memory modules, at best two, which limits the maximum amount of RAM.

As you can see, key components such as the processor and video card cannot be upgraded in such devices. That is why, when buying a laptop, you should decide right away what tasks it will be used for, since it is practically impossible to raise its performance later.

Conclusion

When the performance of your computer no longer suits you and you are thinking about upgrading it, first of all evaluate the feasibility of replacing individual components. Under certain conditions, you risk wasting your money without getting the expected performance gain.

Remember the so-called "bottleneck" rule. Its essence is that the maximum performance of your computer is limited by its weakest component. For example, if you buy a powerful video card while keeping a weak processor, you can be sure the new purchase will not reveal its potential in such a combination: the maximum computing capacity will be limited by that of the central processor.

Any of the components discussed in this material can become such a "bottleneck" in your computer, be it a processor, a video card, the amount of RAM or a slow hard drive. Consider this and do not overpay for components that will not be able to fulfill their potential in your system.

In general, your costs, as well as the complexity of the upgrade, depend on what you expect from it. As a rule, computer games are the most demanding of system resources, and if they are the reason for the improvement, the most serious financial investments await you. The least expensive option is increasing the amount of RAM.

In any case, if you are not too well versed in the internal structure of the computer, be sure to consult knowledgeable acquaintances or sales consultants of the respective stores before buying new parts. Moreover, the quality of that consultation will directly depend on the completeness of the technical information you provide.

For those who want to fully control the process of choosing new components without relying on other people's opinions, we recommend studying the design of the personal computer and the characteristics of its key components in more detail.

The most basic parameters affecting the speed of a computer are its hardware: how a PC works depends on what hardware is installed in it.

CPU

It can be called the heart of the computer. Many are simply sure that the main parameter affecting PC speed is the clock frequency, and that is correct, but not the whole story.

Of course, the number of gigahertz matters, but the number of processor cores also plays an important role. Without going into too much detail: the higher the frequency and the more cores, the faster your computer.

RAM

Again, the more gigabytes of this memory, the better. Random access memory, or RAM for short, is temporary memory where program data is written for quick access. However, after the PC shuts down it is all erased; that is, the memory is volatile (dynamic).

And here there are some nuances. Many people, chasing memory capacity, install a bunch of modules from different manufacturers with different parameters and fail to get the desired effect. For the performance gain to be maximal, you need to install modules with identical characteristics.

This memory also has a clock frequency, and the higher it is, the better.

Video adapter

It can be discrete or built-in. The built-in kind sits on the motherboard, and its characteristics are very modest: enough only for ordinary office work.

If you plan to play modern games or use graphics-processing programs, you need a discrete video card. This will raise your PC's performance. It is a separate board inserted into a special connector on the motherboard.

Motherboard

It is the largest board in the case. The performance of the entire computer depends directly on it, since all the components are mounted on it or connected to it.

HDD

This is the storage device where we keep all our files, installed games and programs. Drives come in two types: HDD and SSD. The latter are much faster, consume less power and are silent. The former have their own parameters that affect PC performance: rotation speed and capacity. And again, the higher they are, the better.

Power Supply

It must supply energy to all PC components in sufficient quantity, otherwise performance will drop significantly.

Program parameters

Also, the speed of the computer is affected by:

  • The state of the installed operating system.
  • The OS version.

The installed OS and software should be properly configured and free of viruses; then performance will be excellent.

Of course, from time to time you need to reinstall the system and all software so that the computer runs faster. You also need to keep track of software versions, because old ones can run slowly due to the errors they contain. It is worth using utilities that clean the system of garbage and increase its performance.

In modern conditions, profit growth is the main necessary trend in the development of enterprises. Profit growth can be achieved in various ways, among which the more efficient use of the company's personnel stands out.

Productivity is a measure of the performance of a company's workforce.

General idea

Labor productivity, according to its calculation formula, is a criterion characterizing the efficiency of labor use.

Labor productivity is understood as the efficiency of labor in the production process. It can be measured by the amount of time required to produce a unit of output.

A definition of the concept is also given in the encyclopedic dictionary of F.A. Brockhaus and I.A. Efron.

According to L.E. Basovskiy, labor productivity can be defined as the productivity of the personnel an enterprise possesses. It is determined by the amount of output produced per unit of working time; the same indicator also determines the labor costs attributable to a unit of output.

Productivity is the amount of products produced by one employee for a set period of time.

It is a criterion characterizing the productivity of living labor and the effectiveness of production work: the formation of product per unit of labor time spent on its production.

Work efficiency grows on the basis of technological progress, the introduction of new technologies, the raising of employees' qualifications, and their financial interest.

Analysis stages

Assessment of labor productivity consists of the following main stages:

  • analysis of absolute indicators for several years;
  • determining the impact of certain factor indicators on the dynamics of productivity;
  • determination of reserves for productivity growth.

Main factors

The main productivity indicators analyzed at modern enterprises operating in market conditions include the need for full employment of personnel and high output.

Output is the amount of product per unit of labor input. It is determined by relating the quantity of products produced or services rendered to a unit of working time.

Labor intensity is the ratio between the cost of working time and the volume of production, which characterizes the cost of labor per unit of product or service.

Calculation methods

In order to measure the performance of work, three methods of calculating performance are used:

  • natural method. It is used in organizations producing homogeneous products. This method takes into account the calculation of work productivity as a correspondence between the volume of products made in natural terms and the average number of employees;
  • the labor method is used if a huge amount of product with a frequently changing assortment is carried out at work sites; the formation is determined in standard hours (the amount of work multiplied by the time standard), and the results are summed up according to different types product;
  • cost method. It is used in organizations that produce heterogeneous products. This method takes into account the calculation of work productivity as the correspondence between the volume of products made in the value formulation and the average number of employees.

To assess the level of labor performance, private, auxiliary and generalizing indicators are used.

Private indicators are the time costs required to produce a unit of output in physical terms per man-day or man-hour. Auxiliary indicators account for the time spent performing a unit of a certain type of work, or the amount of such work performed per unit of time.

Calculation method

Among the possible indicators of labor productivity, output can be singled out: average annual, average daily and average hourly output per employee. There is a direct relationship between these figures: the number of working days, the length of the working day and the average hourly output together determine the employee's average annual output.

Labor productivity according to the calculation formula is as follows:

VG = KR * PRD * VSP

where VG is the worker's average annual output, thousand rubles;

KR is the number of working days worked, days;

PRD is the length of the work shift, hours;

VSP is the average hourly output, thousand rubles per person.
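A minimal sketch of this formula; the working days, shift length and hourly output below are made-up illustrations, not data from the article:

```python
# Average annual output per worker: VG = KR * PRD * VSP.

def average_annual_output(kr_days: float, prd_hours: float, vsp: float) -> float:
    """kr_days: working days per year; prd_hours: shift length, hours;
    vsp: average hourly output (thousand rubles per person)."""
    return kr_days * prd_hours * vsp

# Hypothetical example: 220 working days, 8-hour shifts,
# 0.5 thousand rubles of output per hour.
print(average_annual_output(220, 8, 0.5))  # -> 880.0 thousand rubles per worker per year
```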

The degree of influence of these conditions can be determined using the method of chain substitution, the method of absolute differences, the method of relative differences, and the integral method.

Having information about the degree to which various conditions affect the studied indicator, one can establish their impact on the volume of output. To do this, the value describing the impact of a given condition is multiplied by the company's average number of employees.

The main factors

Further research into labor productivity focuses on detailing the impact of various conditions on a worker's (average annual) output. Conditions fall into two categories: extensive and intensive. Extensive factors affect the use of working time; intensive factors affect hourly work efficiency.

The analysis of extensive factors focuses on detecting losses of working time from unproductive use. Such losses are established by comparing the planned and the actual working time funds. The impact of the losses on output is established by multiplying the number of lost days or hours by the planned average hourly (or average daily) output per worker.

The analysis of intensive factors focuses on detecting conditions associated with changes in the labor intensity of the product. Reducing labor intensity is the main condition for increasing labor productivity, and the reverse relationship also holds.

Factor analysis

Let's consider the basic formulas of productivity of factors of production.

To consider the factors of influence, we use the methods and principles of calculations generally recognized in economic science.

The formula for labor productivity is as follows:

W = Q / T

where W is labor productivity, thousand rubles per person;

Q is the volume of output produced, in value terms, thousand rubles;

T is the number of personnel, people.

Expressing Q from this productivity formula, we get:

Q = W * T

Thus, the volume of output changes depending on changes in labor productivity and in the number of personnel.

The dynamics of changes in the volume of production under the influence of changes in the performance indicator can be calculated by the formula:

ΔQ(W) = (W1 - W0) * T1

The change in output volume under the influence of a change in the number of employees is calculated using the formula:

ΔQ(T) = (T1 - T0) * W0

The combined effect of the factors:

ΔQ(W) + ΔQ(T) = ΔQ(total)
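A short sketch of this two-factor decomposition, with hypothetical base-period (index 0) and reporting-period (index 1) values, showing that the two effects sum to the total change:

```python
# Decomposing the change in output into productivity and headcount effects.

W0, W1 = 500.0, 550.0  # labor productivity, thousand rubles per person (hypothetical)
T0, T1 = 100, 110      # headcount, people (hypothetical)

dQ_W = (W1 - W0) * T1  # effect of the productivity change
dQ_T = (T1 - T0) * W0  # effect of the headcount change

print(dQ_W, dQ_T, dQ_W + dQ_T)  # 5500.0 5000.0 10500.0
print(W1 * T1 - W0 * T0)        # 10500.0 -- the total change in output; they match
```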

The change due to the influence of factors can also be calculated using the factor model of the productivity formula:

PT = Ud * D * Tcm * CHV

where PT is labor productivity, thousand rubles per person;

Ud is the share of production workers in the total headcount;

D is the number of days worked by one worker per year, days;

Tcm is the average length of the working day, hours;

CHV is the worker's average hourly output, thousand rubles per person.
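A sketch of this four-factor model with hypothetical inputs (all numbers below are illustrative assumptions):

```python
# Four-factor model of annual labor productivity: PT = Ud * D * Tcm * CHV.

def productivity(ud: float, d_days: float, tcm_hours: float, chv: float) -> float:
    """ud: share of workers in headcount; d_days: days worked per year;
    tcm_hours: average shift length; chv: hourly output (thousand rubles)."""
    return ud * d_days * tcm_hours * chv

# Hypothetical example: 80% of staff are production workers, 220 days worked,
# 7.9-hour average shift, 0.5 thousand rubles of output per worker-hour.
print(productivity(0.80, 220, 7.9, 0.5))  # -> 695.2 thousand rubles per person
```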

Basic reserves

Productivity research is carried out in order to establish reserves for its growth. Growth reserves can come from the following factors affecting labor productivity:

  • raising the technological level of production, i.e. introducing the latest scientific and technical processes, obtaining high-quality materials, and mechanizing and automating production;
  • improving the company's structure and selecting the most competent employees, eliminating staff turnover, and raising employees' qualifications;
  • structural changes in production, which take into account the replacement of certain individual product types, an increase in the share of new products, changes in the labor intensity of the production program, etc.;
  • forming and improving the necessary public infrastructure, i.e. solving the difficulties associated with meeting the needs of the company and of labor collectives.

Increase directions

The question of how to increase labor productivity is very relevant for many enterprises.

The essence of the growth of labor productivity at the enterprise is manifested in:

  • change in the amount of production when using a unit of labor;
  • change in labor input per established unit of output;
  • change in labor costs per 1 ruble of output;
  • reducing the share of labor costs in the cost;
  • improving the quality of goods and services;
  • reduction of production defects;
  • increasing the number of products;
  • an increase in the mass of sales and profits.

To ensure a high return from the company's employees, management needs to provide normal working conditions. A person's productivity, as well as the efficiency of his labor, can be influenced by a huge number of factors, both intensive and extensive. Taking these factors affecting labor productivity into account is necessary when calculating the productivity indicator and the reserves for its growth.

