
Laravel 4 failed to open stream: Permission denied

If you ever see this error in Laravel 4 Homestead, with some other accompanying information, it's a problem you caused... but fortunately the fix is simple.

It took me a while to figure out what the following error really meant... but it means exactly what it says:

file_put_contents(/home/vagrant/.composer/cache/repo/https---packagist.org/provider-illuminate$config.json): failed to open stream: Permission denied

http://packagist.org could not be fully loaded, package information was loaded from the local cache and may be out of date

First... the reason you're seeing this is that you accidentally ran one of the following commands with sudo:

composer update
OR
php composer.phar update

Remember not to run these commands with sudo. If you do, the cache files are created with root as the owner, and when you try to run them again without sudo they fail because your user no longer has permission to write those files.

The error above baffled me for about an hour, because I was reading the file path "/home/vagrant/.composer/cache/repo/https---packagist.org/provider-illuminate$config.json" as a URL and thinking the problem was with my network or with packagist.org. It's actually a file path, and there really IS an "https---packagist.org" directory... so just cd to "/home/vagrant/.composer/cache/repo/" and run

sudo chown -R vagrant:vagrant https---packagist.org/

Or, of course, whatever your actual directory is. This is almost guaranteed to fix your problem.

If you're seeing these errors and you're not using Homestead, then change the owner:group for the chown command from vagrant:vagrant to whatever your owner:group should be.
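
If you're not sure which files sudo left behind, a quick check first can save some guessing. This is just a sketch assuming the default Homestead paths from the error above; adjust the path and owner to match your setup:

# list the cache directory and look for entries owned by root
ls -la /home/vagrant/.composer/cache/repo/
# if anything shows root:root, reset ownership of the whole composer cache
sudo chown -R vagrant:vagrant /home/vagrant/.composer/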

Beating the dreaded 504 Gateway Timeout on Laravel Homestead

I'm developing a huge website on Laravel, and wanted to do everything the Laravel way if possible, so I'm using the Laravel Homestead setup. I'm used to working with XAMPP and WAMP Server on Windows, so it took a bit of work to get all the prerequisites for Laravel Homestead installed, but it's worth the trouble.

Homestead is easy to work with. It uses VirtualBox to give you an Ubuntu 14.04 server with Nginx installed and ready to go. In addition, it maps your domain roots to Windows folders, so I can use my normal Windows PHP tools (I use Eclipse most of the time). Every Laravel website already has the normal .gitignore file at the root, so using Git for version control is simple and painless to set up.

However, debugging turned out not to be quite as painless. I set up Xdebug as I normally do, and it worked. But then I noticed that when I stopped at breakpoints, Nginx would invariably time out, and when it did, something happened to my Xdebug session. Sometimes I could just restart it by including the session code in the next URL, but sometimes even that didn't work, and the only way I could get Xdebug running again was to terminate and remove the session and launch another one. Annoying and time consuming, but not fatal.

I was also running a bunch of commands that took a long time to run. I was able to get them to complete by increasing my PHP timeout, but I'd still get the 504 Gateway Timeout error in the browser.
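
For reference, here's roughly what I mean by increasing the PHP timeout. The exact file depends on the PHP version on your Homestead box, so treat the path below as an assumption (on the Ubuntu 14.04 box it's normally the FPM php.ini):

# the php.ini path is an assumption - adjust for your PHP version
sudo nano /etc/php5/fpm/php.ini
# inside php.ini, raise the script timeout (seconds):
#   max_execution_time = 3000
# then restart PHP-FPM so the change takes effect
sudo service php5-fpm restart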

It took me many hours of pain and suffering to finally figure out how NOT to get 504 Gateway Timeout errors when developing with Laravel on their Homestead setup. It seems many developers are running into the same things I did, and none of the stackoverflow.com or other question/answer sites (including Laravel's) seemed to have the answer. Hopefully, this will save you the same pain.

If you use PuTTY, then cd /etc/nginx, sudo nano nginx.conf, and add the following lines to your Nginx setup in the http "Basic Settings" section (I put them just under the keepalive_timeout setting, but it doesn't really matter as long as they're in the http section):

http {

        ##
        # Basic Settings
        ##

        client_header_timeout 3000;
        client_body_timeout 3000;
        fastcgi_read_timeout 3000;
        client_max_body_size 32m;
        fastcgi_buffers 8 128k;
        fastcgi_buffer_size 128k;

If 3000 is too much for you, then adjust as needed, but this works well for me... NOTE... you definitely don't want these settings in your production environment.
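
Once you've saved the file, reload Nginx so the new timeouts actually take effect:

sudo service nginx reload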

Happy coding with Laravel.

Clearing Cache in Drupal 7

Are you tired of being told that the fix for every Drupal problem is to clear your cache, and then finding that the instructions for clearing it require you to be logged in with your admin pages working... but that's exactly what's broken? Anyway, it's easy... if you have access to the database. Sure, you can do it the way the Drupal help pages tell you (manually go to each table whose name starts with "cache" and truncate it). If you've got 20 minutes to waste every time you need to do it, go ahead. Or you can run the following statements and do it all at once:
 
DELETE FROM `cache`;
DELETE FROM `cache_block`;
DELETE FROM `cache_bootstrap`;
DELETE FROM `cache_field`;
DELETE FROM `cache_filter`;
DELETE FROM `cache_form`;
DELETE FROM `cache_image`;
DELETE FROM `cache_menu`;
DELETE FROM `cache_page`;
DELETE FROM `cache_path`;
DELETE FROM `cache_update`;
DELETE FROM `cache_views`;
DELETE FROM `cache_views_data`;

You can also do it with a .php file. Just put this in your site root (I call it drupal_clear_cache.php) and run it by calling http://yourdomain.com/drupal_clear_cache.php. Just remember to delete it as soon as you're done if your site is publicly available.
 


<?php
// define the Drupal root as the current directory (the site root)
define('DRUPAL_ROOT', getcwd());
// include the Drupal bootstrap code
include_once('./includes/bootstrap.inc');
// do a full Drupal bootstrap so the API is available
drupal_bootstrap(DRUPAL_BOOTSTRAP_FULL);
// clear all caches
drupal_flush_all_caches();


Hope that saves you some time

BG Double Adsense

One of our websites was fixed width (980px), but as I worked on it, I noticed a lot of unused space (my monitor is 2048x1152). As the website depends on advertising revenue to continue to be free, it's important to maximize the revenue. So, I designed a Joomla module that is able to detect the user's browser window size and add additional Google Adsense windows on each side of the main content, if the user's browser window is big enough. It has more than doubled our ad revenue. Hope it will do the same for you.

  Brad Gies


You can use BG Double Adsense the same way you use the other Joomla! Adsense modules, i.e. put your Google Adsense code into the module and display it in the module position. BUT... if you use a fixed width on your website (which most of us seem to do), BG Double Adsense can display extra Google Adsense "windows" alongside your normal body content if the user's browser window is big enough.

BG Double Adsense works by injecting Javascript into your page. The Javascript reads the actual size of the browser window and compares it to the width of your website's body content. It then decides whether there is enough room between the body content and the edge of the browser window to display the Google Adsense ad you configured. If there is room for the ad to be fully displayed, a div tag is injected into the HTML, and the Google Adsense code is activated inside the new div.
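
To make that concrete, here's a simplified sketch of the kind of Javascript involved. This is NOT the module's actual source, just an illustration of the window-size check it performs; the widths and the id are made-up examples:

// simplified sketch only - not the module's real code
var contentWidth = 980;   // your site's fixed body width (example)
var adWidth = 160;        // width of the Adsense unit you configured (example)
var windowWidth = window.innerWidth || document.documentElement.clientWidth;

// only inject a side ad if it can be fully displayed on each side of the content
if (windowWidth >= contentWidth + (2 * adWidth)) {
    var sideDiv = document.createElement('div');
    sideDiv.id = 'bg-left-ad';  // made-up id for illustration
    sideDiv.style.cssText = 'position:fixed; top:100px; left:10px; width:' + adWidth + 'px;';
    document.body.appendChild(sideDiv);
    // the module would then write your Adsense code into this new div
}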

Optimizing Your MySQL database

First of all, I highly recommend Rackspace's Cloud Servers. We've been using them for many months now, and they are great to work with. It's absolutely painless to resize your server, add or remove RAM, and the bandwidth charges are incredibly reasonable. In our case, all of our websites are relatively low volume, so we have never tested the limits of the cloud servers, but we do have a lot of websites running on our server, and we have the additional complication of running the web server, mail server and databases all on the same machine.

Of course, like most people we don't want to pay more than we need to for our web presence, so we run a Cloud Server and we've sized it to only have 1 Gig of RAM. Now, when you think about running the web server, mail server and the databases all on that configuration, you can see that we have to be a little creative to have good performance. In addition, one of our websites (NoCrappyApps.com) is integrated with the Android Market and we run several daemons constantly to keep the data as fresh as possible, so this poor little server is really being pushed a bit.

Now, with our configuration, we determined that the maximum RAM we could allow MySQL to use was 384M, and we currently have 22 databases in that MySQL instance. Most of the websites are Joomla 1.5 or Joomla 1.6 sites, and many of them have over 200 database tables. We happen to be a little obsessed with knowing what our users are using (because that's how we know which features are popular), so we log almost everything, and then run statistics routines every night to give ourselves and our users up-to-date information.

Obviously, we will have to add more memory to the server in the next few months, but Rackspace gives us options there also. We could just resize our existing Cloud Server to give it 2G of RAM, but for the same price Rackspace gives us the option of adding another server with 1G of RAM. That has the advantage of letting us move all the databases to the new server, which means the new server can be configured specifically for MySQL while the old server can be optimized for just the web server and mail server. We'll probably go with the second server, although it will be a little more setup work in the beginning. We use an Ubuntu server, and Rackspace makes it easy to set up and configure a new one. Actually, I should mention that Rackspace has really good documentation on setting up servers. I've been able to set up a new server and have it completely configured in less than 2 hours, and at the time I had very little experience with cloud servers. In the beginning, I actually just set up about 3 or 4 test servers and then deleted them. I think Rackspace charged me about 4 cents per server (their charges are on a per-server, per-hour basis, so you only pay for what you actually use... and did I mention that their charges are incredibly low?).

Ok... so how the heck are we running 15 websites, MySQL with 22 databases, a web server and a mail server all on 1G of RAM? Well.. the first thing we had to do was configure Apache not to hog all the memory. Apache can easily use the entire 1 GIG of RAM and the mail server likes quite a bit of RAM also (all 15 websites have unlimited mail accounts). Once we had those under control, we had the 384M that we thought MySQL could use.
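
Keeping Apache in line mostly came down to capping the number of worker processes it can spawn. The numbers below are purely illustrative (not our actual settings), but on Ubuntu the prefork limits live in /etc/apache2/apache2.conf:

<IfModule mpm_prefork_module>
    StartServers            2
    MinSpareServers         2
    MaxSpareServers         4
    MaxClients             20
    MaxRequestsPerChild   500
</IfModule>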

The next thing we had to do was optimize all the SQL. In our case, we're using Joomla 1.5 or 1.6 for all the websites, but Joomla 1.6 was missing a lot of indexes and even Joomla 1.5 doesn't have all the indexes it should. So... we set up MySQL to log the slow queries (set log_slow_queries to the file you want to log to, and we set long_query_time = 2 (seconds) in my.cnf). Then every day, I spent about an hour checking the slow queries and figuring out which indexes to add to speed them up. Of course, I had to rewrite some queries to get the performance I wanted. This is an ongoing task, but I now only need to spend about an hour a week on it. I also monitor the MySQL error log.
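
Here's roughly what that looks like in my.cnf. The log path is just an example, and on newer MySQL versions the equivalent settings are slow_query_log and slow_query_log_file:

[mysqld]
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time  = 2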

I should also mention that you should not be using the old mysql driver for your Joomla databases. Switch it to mysqli (the "i" stands for "improved") instead, and consider going all the way to the InnoDB storage engine for your tables. InnoDB, of course, is much better for large databases and for tables that do a lot of simultaneous reads and writes, because it uses row locking while the default MyISAM tables use table locking (not so good, although smaller or low-volume databases won't notice it as much). At this point, we are still on MyISAM because of our memory constraints, so we turned the skip-innodb option on so InnoDB doesn't load and use up some of our RAM even though it's not being used.
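
If you want to see which engine your tables are currently using, or convert one, something like this works (the database and table names are just examples; jos_ is the default Joomla prefix):

SHOW TABLE STATUS FROM your_database;
ALTER TABLE jos_content ENGINE = InnoDB;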

Now... you need more information on the innards of MySQL to do more. There are some links here that might help, and you can download a perl script here that is quite helpful in making suggestions for improvements.

 NOTE - DO NOT JUST BLINDLY follow any recommendations without fully understanding what they do.

Of course, everyone is looking for a magic formula that just tells them what settings to change, and there are my.cnf files on the internet for large, small and medium databases. If you're just getting started these are ok, but you still need to go through each line in your my.cnf file and make sure you understand why it's set where it is... and make the changes you need.

  The fact is that there just isn't a magic formula for managing a database.

You need to monitor your actual server use and then make changes based on what's actually happening... and you need to do this continually... this is not a set-and-forget thing. You can run SQL queries to monitor your server. Use the "SHOW STATUS" command to see your server status, and "SHOW VARIABLES" to see your settings. Or just use Navicat (the paid version), which has a server monitor that does the same thing. You don't have to pay for tools, but it is nice to be able to see it in a GUI rather than a command-line format.
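
For example, you can dump everything or narrow either command down with LIKE so you're not wading through hundreds of rows:

SHOW GLOBAL STATUS;
SHOW VARIABLES;
-- or pick out just what you care about, e.g. connection counts
SHOW GLOBAL STATUS LIKE 'Connections';
SHOW VARIABLES LIKE 'max_connections';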

Some of the variables I've found to be very important are "query_cache_limit" and "query_cache_size". Read about them and understand them; you need to experiment and know how your MySQL instance is actually being used to get them set correctly (TIP - the MySQL defaults cannot be relied on, they won't be right for your use). Essentially, "query_cache_size" is the amount of RAM you allocate for query caching. NOTE that only plain SELECT queries like "SELECT * FROM some_table" can be cached, and the cache only really pays off for tables that rarely change... but those are mostly your lookup tables, which are the ones used most frequently. You want to get most of those queries into the cache WITHOUT using any more RAM than you need to.
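
As a starting point only (the right numbers depend entirely on your own workload, not on these example values), the settings live in my.cnf under [mysqld]:

[mysqld]
query_cache_type  = 1
query_cache_size  = 32M
query_cache_limit = 1M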

Then you want to play with the "table_open_cache" variable (early versions of MySQL before 5.1 had this set MUCH too low). The MySQL tuning script will help here, but the key status variables to watch to see if you've got it right are "Open_tables", "Opened_tables", "Open_files" and "Opened_files". You want the ratio of "Opened_files" to "Open_files" to be very low (not many "Opened_files"), and the same with tables (very few "Opened_tables" compared to "Open_tables").
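
You can check those counters and the current cache size like this:

SHOW GLOBAL STATUS LIKE 'Open%tables';
SHOW GLOBAL STATUS LIKE 'Open%files';
SHOW VARIABLES LIKE 'table_open_cache';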

Also monitor your "key_blocks_used" and "key_blocks_unused" carefully. You don't want too many "key_blocks_unused" or you are just wasting RAM. On the other hand, you don't want the MySQL server continually having to go to the disk for its "key_read_requests" either. Check the ratio of "key_read_requests" (the number of times the server needed a key) to "key_reads" (the number of times the key was not in cache memory, so the server had to go to the disk to find it). The number of "key_reads" compared to the number of "key_read_requests" should be very low (much less than 5% if everything is set up properly, and hopefully a lot lower than that). The variable to concentrate on here is "key_buffer_size", the total size of the key cache. Increase it if you have too many "key_reads" and decrease it if you have too many "key_blocks_unused".
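
The same approach works for the key cache counters; compare Key_reads to Key_read_requests in this output and watch the block counts:

SHOW GLOBAL STATUS LIKE 'Key_read%';
SHOW GLOBAL STATUS LIKE 'Key_blocks%';
SHOW VARIABLES LIKE 'key_buffer_size';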

Anyway, that's a brief summary of the most important issues... and I've purposely not given too much detail on them, because you need to understand them... AND know how your MySQL server is ACTUALLY being used to get them right. There is NO magic formula unless you simply add dozens of gigs of RAM and let everything use mega memory if it wants. We can't afford that, but if you can, go for it. Until I win the lottery, I'll have to be content with actually knowing what my server does and how to set it up for that :).
