I am all about performance and speed. I even go so far as to test various ways of doing things in bash (or other programming languages) to optimize the code so it runs faster or more efficiently. I optimize services in the same manner. My fairly new laptop has one of the hybrid graphics setups – Intel + NVIDIA. It works with Bumblebee… and CUDA… for the most part. Simply running ‘optirun command’ switches to the NVIDIA graphics card, which gives much better performance, not only in graphics-intensive stuff but even in the bash shell.
Example 1: running a simple ‘time ls -la’ in a directory containing 1590 items, without optirun:
[codesyntax lang="bash"]
real    0m0.095s
user    0m0.012s
sys     0m0.008s
[/codesyntax]
Example 2: running ‘optirun xterm’ and then running the same command ‘time ls -la’ in the same directory, in the new NVIDIA xterm:
[codesyntax lang="bash"]
real    0m0.015s
user    0m0.000s
sys     0m0.012s
[/codesyntax]
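If you want to repeat that comparison without opening two xterms by hand, here is a minimal sketch. It is not exactly the same as timing inside an optirun xterm, but it is close enough for a quick check. It assumes Bumblebee's optirun is on PATH, and the directory path is just illustrative:
[codesyntax lang="bash"]
#!/bin/bash
# Rough timing comparison sketch: run the same command directly and under
# optirun, using the bash 'time' builtin in each case.
dir="$HOME/some/large/directory"   # illustrative; substitute your own

echo "== without optirun =="
time bash -c "ls -la '$dir' > /dev/null"

echo "== with optirun =="
optirun bash -c "time ls -la '$dir' > /dev/null"
[/codesyntax]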
And even the glx tests are much faster:
[codesyntax lang="bash"]
$ glxspheres
Polygons in scene: 62464
Visual ID of window: 0xa4
Context is Direct
OpenGL Renderer: Mesa DRI Intel(R) Ivybridge Mobile
60.131595 frames/sec - 67.106860 Mpixels/sec
31.846655 frames/sec - 35.540867 Mpixels/sec
31.377641 frames/sec - 35.017447 Mpixels/sec
31.636594 frames/sec - 35.306439 Mpixels/sec
[/codesyntax]
and:
[codesyntax lang="bash"]
$ optirun glxspheres
Polygons in scene: 62464
Visual ID of window: 0x21
Context is Direct
OpenGL Renderer: GeForce GTX 660M/PCIe/SSE2
153.415397 frames/sec - 171.211583 Mpixels/sec
158.583456 frames/sec - 176.979136 Mpixels/sec
161.016123 frames/sec - 179.693993 Mpixels/sec
158.156824 frames/sec - 176.503015 Mpixels/sec
[/codesyntax]
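If you'd rather average those frames/sec numbers than eyeball them, something like this works. It's only a sketch: it assumes glxspheres keeps printing lines in the "N frames/sec - M Mpixels/sec" format shown above, and that coreutils' timeout and Bumblebee's optirun are available:
[codesyntax lang="bash"]
# Let glxspheres run for 15 seconds under optirun, then average the
# first field of every "frames/sec" line it printed.
timeout 15 optirun glxspheres | \
    awk '/frames\/sec/ { sum += $1; n++ }
         END { if (n) printf "average: %.2f frames/sec over %d samples\n", sum / n, n }'
[/codesyntax]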
To me, this is enough of a performance improvement to want to use the NVIDIA graphics card all the time. Except there is no way to do that on the new laptop… at least none that I have figured out.
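One partial workaround, a sketch only and not a real always-on solution, is to shadow the programs you care about with optirun wrappers in your shell config. The program names below are just illustrative:
[codesyntax lang="bash"]
# ~/.bashrc snippet (illustrative): force selected programs through optirun.
# This does not make the NVIDIA card the default for everything; it only
# covers commands launched from a shell that has these aliases loaded.
alias xterm='optirun xterm'
alias glxspheres='optirun glxspheres'
[/codesyntax]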