
**BLAS (Basic Linear Algebra Subprograms)**

BLAS is a specification for low-level linear algebra routines. The routines are split into three levels:

- Level 1: routines that run in linear time, O(n). These include *vector* operations.
- Level 2: routines that run in quadratic time, O(n^2). These include *matrix-vector* operations.
- Level 3: routines that run in cubic time, O(n^3). These include *matrix-matrix* operations.

Architecture-specific implementations (MKL for Intel chips, for example) take advantage of vectorization and other optimizations available on their respective hardware. Note also that most vendor-specific implementations (and some open-source ones, like OpenBLAS) support multithreaded operation for use on shared-memory machines. A few BLAS implementations are listed here:

- Netlib BLAS: The official reference implementation (Fortran 77).
- Netlib CBLAS: Reference C interface to BLAS.
- Intel MKL: x86 (32- and 64-bit). Optimized for Pentium, Core, Xeon, and Xeon Phi.
- OpenBLAS: Optimized successor to GotoBLAS. Supports x86 (32- and 64-bit), MIPS, and ARM.
- ATLAS (Automatically Tuned Linear Algebra Software): Implementation that automatically creates an optimized BLAS library for any architecture.

**LAPACK (Linear Algebra Package)**

LAPACK is a library for performing higher-level linear algebra operations. It is built “on top” of BLAS, and the underlying BLAS routines are often completely transparent to the programmer. Some of the things LAPACK can do:

- Solve systems of linear equations
- Linear least squares
- Eigenvalue problems
- Singular Value Decomposition (SVD)
- Matrix Factorizations (LU, QR, Cholesky, Schur decomposition)

Both real and complex, single and double precision matrices can be used.

The function naming scheme is *pmmaaa*, where:

- *p*: precision — S for single, D for double (C and Z for their complex counterparts)
- *mm*: two-letter code denoting the type of matrix the algorithm expects
- *aaa*: code of up to three letters denoting the algorithm performed by the subroutine

For example, DGESV solves a system of linear equations (SV) for a double-precision (D) general matrix (GE).

Like BLAS, architecture-specific implementations of LAPACK exist as well (Intel MKL, Apple vecLib, etc.). Most of these are multithreaded for use on shared-memory machines.

For use on distributed-memory parallel machines, ScaLAPACK is available (and is also included in some vendor implementations, such as Intel MKL).


```cpp
#include <iostream>

// Recursively compute the greatest common divisor via the Euclidean algorithm
int calcGCD(const int& a, const int& b) {
    if (b == 0)
        return a;
    int aPrime = a % b;
    return calcGCD(b, aPrime);
}

int main() {
    int aa, bb;
    std::cout << "Enter two integers for GCD: ";
    std::cin >> aa >> bb;
    std::cin.ignore();
    std::cout << "Answer: " << calcGCD(aa, bb) << std::endl;
    std::cin.get();
}
```


```cpp
#include <iostream>
#include <vector>

// Naive exponential-time recursion
int calcFibSlow(const int& n) {
    if (n <= 1)
        return n;
    else
        // Recursively calculate F_n
        return calcFibSlow(n - 1) + calcFibSlow(n - 2);
}

// Bottom-up dynamic programming, O(n)
int calcFibFast(const int& n) {
    // Guard: for n == 0 the table would have no index 1
    if (n == 0)
        return 0;
    // Create a vector of n+1 integers, since we want the zeroth F_n too
    std::vector<int> listN(n + 1, 0);
    listN[1] = 1;
    // Generate up to F_n
    for (int ii = 2; ii <= n; ++ii) {
        listN[ii] = listN[ii - 1] + listN[ii - 2];
    }
    return listN[n];
}

int main() {
    int nVal;
    // Read in the desired F_n
    std::cin >> nVal;
    std::cin.ignore();
    // Output the values of both routines
    std::cout << calcFibSlow(nVal) << std::endl;
    std::cout << calcFibFast(nVal) << std::endl;
    std::cin.get();
}
```


A reference refers to (is another name for) another object. Any operation on the reference actually operates on the object to which the reference is bound.

```cpp
int ivalue = 128;
int &refValue = ivalue;
int &refValue2;  // error: a reference must be initialized
```

A pointer is an object itself (unlike a reference) and “points to” another object.

```cpp
int *intPtr;   // Pointer to an int
double *dPtr;  // Pointer to a double
```

A pointer can be in one of four states: 1) it points to an object; 2) it points to the location just past the end of an object; 3) it is a null pointer; 4) it is invalid (none of the preceding states).

The address-of operator (&) is used to get the address of an object:

```cpp
int intVal = 100;
int *intPtr = &intVal;
```

The dereference operator (*) is used to access the value “pointed to” by the pointer:

```cpp
int ival = 100;
int *intPtr = &ival;
cout << *intPtr;  // outputs the value pointed at by intPtr
```

The following code illustrates how to pass a vector of integers by reference to a function:

```cpp
#include <iostream>
#include <vector>

using std::vector;
using std::cin;
using std::cout;

// The vector is passed by const reference: no copy is made
int MaxPairwiseProduct(const vector<int>& numbers) {
    int result = 0;
    int n = numbers.size();
    for (int i = 0; i < n; ++i) {
        for (int j = i + 1; j < n; ++j) {
            if (numbers[i] * numbers[j] > result) {
                result = numbers[i] * numbers[j];
            }
        }
    }
    return result;
}

int main() {
    int n;
    cin >> n;
    vector<int> numbers(n);
    for (int i = 0; i < n; ++i) {
        cin >> numbers[i];
    }
    int result = MaxPairwiseProduct(numbers);
    cout << result << "\n";
    return 0;
}
```

```cpp
// Gradient Descent
double numTrain = double(totalPayVec.size());
double learnParam = .0000002;
int numIter = 50000;
for (int ii = 0; ii < numIter; ii++) {
    // Iterate the parameter vector
    pVec = pVec - ((learnParam / numTrain) *
                   (((xVec * pVec) - yVec).transpose() * xVec).transpose());
    // Calculate and output the cost function value for each iteration
    MatrixXd sumCost = (((xVec * pVec) - yVec).transpose()) * ((xVec * pVec) - yVec);
    double costFuncVal = (1 / (2.0 * numTrain)) * sumCost(0);
    std::cout << "Iteration: " << ii << " "
              << "Cost Function Value: " << costFuncVal << std::endl;
}
```


In the course itself, a MATLAB optimization solver (fminunc) is used to learn the parameter vector. In this C++ implementation, however, I first attempted to minimize the cost function using gradient descent:

```cpp
VectorXd onesVec = VectorXd::Ones(yVec.size());
double numTrain = double(admitVec.size());
VectorXd logVec = VectorXd::Zero((xVec * pVec).size());
VectorXd oneMinusLogVec = VectorXd::Zero((xVec * pVec).size());

// Gradient Descent
double learnParam = .001;
int numIter = 1000;
for (int ii = 0; ii < numIter; ii++) {
    VectorXd sigVec = xVec * pVec;
    // Create the sigmoid and log vectors
    for (int jj = 0; jj < sigVec.size(); jj++) {
        sigVec(jj) = calcSigmoid(double(sigVec(jj)));
        logVec(jj) = log(sigVec(jj));
        oneMinusLogVec(jj) = log(onesVec(jj) - sigVec(jj));
    }
    // Iterate the parameter vector
    pVec = pVec - ((learnParam / numTrain) *
                   ((sigVec - yVec).transpose() * xVec).transpose());
    // Calculate the cost function
    MatrixXd sumCost = (((-1 * yVec).transpose() * logVec) -
                        (onesVec - yVec).transpose() * oneMinusLogVec);
    double costFuncVal = (1 / numTrain) * sumCost(0);
}
```

However, I found that for the dataset I was using, gradient descent was converging much too slowly. I decided to try out Newton’s Method:

```cpp
VectorXd onesVec = VectorXd::Ones(yVec.size());
double numTrain = double(admitVec.size());
VectorXd logVec = VectorXd::Zero((xVec * pVec).size());
VectorXd oneMinusLogVec = VectorXd::Zero((xVec * pVec).size());

// Newton's Method
int numIterNewton = 5000;
for (int ii = 0; ii < numIterNewton; ii++) {
    VectorXd sigVec = xVec * pVec;
    // Create the sigmoid and log vectors
    for (int jj = 0; jj < sigVec.size(); jj++) {
        sigVec(jj) = calcSigmoid(sigVec(jj));
        logVec(jj) = log(sigVec(jj));
        oneMinusLogVec(jj) = log(onesVec(jj) - sigVec(jj));
    }
    // Create the gradient and Hessian
    VectorXd gradJ = ((1 / numTrain) *
                      ((sigVec - yVec).transpose() * xVec).transpose());
    MatrixXd hMat = ((1 / numTrain) *
                     ((sigVec.transpose() * (onesVec - sigVec))(0)) *
                     (xVec.transpose() * xVec));
    // Iterate the parameter vector
    pVec = pVec - hMat.inverse() * gradJ;
    // Calculate the cost function
    MatrixXd sumCost = (((-1 * yVec).transpose() * logVec) -
                        (onesVec - yVec).transpose() * oneMinusLogVec);
    double costFuncVal = (1 / numTrain) * sumCost(0);
}
```

Newton’s Method converged much more quickly. It should be noted, however, that the dataset I was using had only two features (n = 2), which made the Hessian inversion relatively harmless. For larger, more complex problems, the choice of Newton’s Method over gradient descent may not be so clear-cut.

Here are some other C++ numerical solvers (not written by me!) located on GitHub.


- For all configurations, add the path to the SFML headers (*<sfml-install-path>/include*) to C/C++ » General » Additional Include Directories.
- For all configurations, add the path to the SFML libraries (*<sfml-install-path>/lib*) to Linker » General » Additional Library Directories.
- Next, link your application to the SFML libraries under Linker » Input » Additional Dependencies by adding, for example, “sfml-graphics.lib”, “sfml-window.lib”, and “sfml-system.lib”.
- If linking dynamically (external DLLs), copy the SFML DLLs to the executable directory.

Here is the “Hello World” test code to see if SFML is working correctly:

```cpp
#include <SFML/Graphics.hpp>

int main() {
    sf::RenderWindow window(sf::VideoMode(200, 200), "SFML works!");
    sf::CircleShape shape(100.f);
    shape.setFillColor(sf::Color::Green);

    while (window.isOpen()) {
        sf::Event event;
        while (window.pollEvent(event)) {
            if (event.type == sf::Event::Closed)
                window.close();
        }
        window.clear();
        window.draw(shape);
        window.display();
    }
    return 0;
}
```


Consciousness returns in a hospital bed, my vision cloudy and fading. The nebulous forms of my wife and child are beside me, holding my hand. I can’t make out their words, but I find comfort in the cadence of their voices. Life continues to fall away. It won’t be long now. I try to squeeze the hand in mine, to let them know it will be ok, but to no avail. I’m so sorry. Goodbye.

I awaken in a cloud of light. Does Heaven exist after all? If it does, I better come up with some good excuses as to why I was absent all of those Sundays. That’s strange. I can remember each and every Sunday of my entire life. The other days of the week as well, I can remember them too, in absolute detail. I don’t have to strain to locate these memories; I just… know.

No pearly gates. No angelic choir. No white robed deity.

“Hello?” I ask my surroundings, “Is anyone else here?”

An answer comes, not from without, but from within. I know where I am: I’m everywhere I choose to be. As to others, well… that question doesn’t exactly make sense: It’s just a remnant thought that lingers from my former existence as a human.

I wonder what happened to me. The answer arrives seemingly before I even finish asking myself the question. I remember it vividly now: Planar collision. The fusion of high dimensional space is one of the few things that can wreak havoc on my being. I’ll have to be more careful, next time.

With my mind torn apart, my consciousness collapsed into a mere shadow of its former self: Human. The healing process would soon begin.

Sentience, while not omnipotence, is still preferable to primal instinct: Had I wandered closer to the point of dimensional intersect, I could have found myself a lizard or a cockroach. I remember when I too closely investigated the overlapping event horizons of a binary black hole. Life as a goldfish had been simple and calm, in retrospect. That is, until I re-integrated myself with the universe. It’s much more traumatic for lower life projections than that of a human, the latter of which are much closer to me in terms of evolution than they could ever imagine.

My absolute knowledge of all things past and present means that I have fully recovered. Unfortunately, I can’t use this infinite knowledge of what has, and is currently, happening to extrapolate information about the future. If I could, perhaps I could avoid setbacks like these.

In the last fleeting moment before all traces of my human existence are absorbed into me, I remember my wife and child. They are here with me. They always have been. They always will be.

It feels good to be back.
