rdoxenham.com Report


  • Alexa Global Rank: #2,026,990

    Server:LiteSpeed...
    X-Powered-By:PHP/5.2.17

    The main IP address is 185.38.44.179. The server is located in Maidstone, United Kingdom. ISP: HostDime Limited. TLD: com. Country code: GB

    The description: rdoxenham.com rhys oxenhams' cloud technology blog search main menu skip to primary content skip to secondary content about me contact details cv what’s new in openstack grizzly? posted on may 25, 201...

    This report was last updated on 11-Jun-2018.

Created Date:2009-10-28
Changed Date:2017-03-04

Technical data for rdoxenham.com


GeoIP provides information such as latitude, longitude and ISP (Internet Service Provider). Our GeoIP service located the host of rdoxenham.com: it is currently hosted in the United Kingdom, and its service provider is HostDime Limited.

Latitude: 51.266670227051
Longitude: 0.5166699886322
Country: United Kingdom (GB)
City: Maidstone
Region: England
ISP: HostDime Limited
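
The same lookup can be reproduced locally. A minimal sketch, assuming the legacy GeoIP command-line tool (geoiplookup) and a whois client are installed; results depend on the freshness of the local GeoIP database, so they may differ slightly from the values above:

  # GeoIP country/city lookup for the report's IP address
  geoiplookup 185.38.44.179

  # network registration details (owner, country, netname) from the regional registry
  whois 185.38.44.179 | grep -iE 'country|netname|org|descr'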

HTTP Header Analysis


HTTP headers are part of the HTTP protocol: the user's browser sends a request describing what it wants and what it will accept back, and the web server (reported here as LiteSpeed) replies with the response headers shown below.

Content-Encoding:gzip
Transfer-Encoding:chunked
Accept-Ranges:bytes
X-Powered-By:PHP/5.2.17
Vary:Accept-Encoding
Server:LiteSpeed
Connection:Keep-Alive
Link:; rel="https://api.w.org/"
Date:Mon, 11 Jun 2018 00:10:09 GMT
Content-Type:text/html; charset=UTF-8
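
These response headers can be retrieved directly. A minimal sketch, assuming curl is available; the output should contain the Server, X-Powered-By and Date fields listed above:

  # send a HEAD request and print only the response headers
  curl -sI http://rdoxenham.com/

  # negotiate gzip compression, as reflected in the Content-Encoding header above
  curl -sI -H 'Accept-Encoding: gzip' http://rdoxenham.com/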

DNS

soa:ns1.thewebhostserver.com. admin.thewebhostserver.com. 2017082808 3600 7200 1209600 86400
ns: ns3.thewebhostserver.com.
    ns2.thewebhostserver.com.
    ns1.thewebhostserver.com.
    ns4.thewebhostserver.com.
ipv4: 185.38.44.179
ASN:33182
OWNER:DIMENOC - HostDime.com, Inc., US
Country:GB
mx:MX preference = 0, mail exchanger = rdoxenham.com.
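
The same records can be queried directly. A minimal sketch, assuming dig (from bind-utils) is installed:

  dig +short SOA rdoxenham.com     # start of authority: primary NS, contact, serial and timers
  dig +short NS  rdoxenham.com     # the four thewebhostserver.com name servers
  dig +short MX  rdoxenham.com     # mail exchanger (preference 0, rdoxenham.com.)
  dig +short A   rdoxenham.com     # the IPv4 address, 185.38.44.179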

HtmlToText

rdoxenham.com rhys oxenhams' cloud technology blog search main menu skip to primary content skip to secondary content about me contact details cv what’s new in openstack grizzly? posted on may 25, 2013 by admin reply the latest grizzly offering represents the seventh major release of openstack, it just goes to show the power of open source and what can be achieved when we work together. grizzly, for many reasons, is a significant milestone for the project, it’s being seen as a stable enterprise platform and the adoption within the industry is increasing exponentially. this blog post aims to detail some of the latest additions that the grizzly release brought to openstack and i’ll attempt to answer why they are important. firstly, lets look at nova. nova provides the compute resources for an openstack based cloud, it emulates a lot of the functionality that is provided by amazon’s ec2; it’s responsible for scheduling and managing the lifecycle of running instances. an important new feature is the ability to now provision physical resources; and i don’t just mean linux containers, i’m talking about entire physical instances, skipping out the requirement for hypervisor integration. there are some limitations with this approach, predominantly around networking but it’s still early days. being able to scale to massive quantities is a common goal across the openstack project, the nova component is one of the first to start to tackle the big scalability problems that the largest openstack clouds are starting to hit. one technology that was previewed in the folsom release and is now more comprehensive in grizzly is the concept of zones (think aws availability zones), hosts can be grouped into these zones and end-users are permitted to list and select a specific zone to deploy their instances into. a typical use case for this would be to provide availability; an openstack cloud may span multiple datacenters, availability zones can allow users to deploy their infrastructure across these therefore introducing fault tolerance. another concept, which sounds very similar at face value, is host aggregates; like availability zones it allows you to group a set of hosts but instead of grouping for availability we group based on a common feature. an example of this would be, all hosts within an aggregate group all have solid state disks (remember that ephemeral instance storage runs on local disk), a flavour, or ‘instance size/type’ is created that can reference this aggregate group so that end-users can be sure that they are exploiting the common feature. in addition, brand new to grizzly is the concept of nova cells. one of the biggest limitations to scale are the dependencies within nova, for example within a cluster there’s a shared database, message queue and set of schedulers, whilst we can load balance and scale out, there are technical and physical limitations inherent to nova. nova cells attempts to create multiple smaller ‘clouds’ within a larger openstack environment, each providing their own database, messaging queue and scheduler set but all ‘reporting’ to one global api in a tree-like structure. the problem here is that only nova supports the cells implementation, we’re going to need to solve these limitation problems for the rest of the components too. 
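
To make the availability zone and host aggregate workflow described above concrete, here is a rough sketch using the nova command-line client of that era. The host, zone and flavour names are illustrative only, and flavour-to-aggregate matching additionally relies on the aggregate extra-specs scheduler filter being enabled:

  # group SSD-backed hypervisors into an aggregate tied to an availability zone
  nova aggregate-create ssd-hosts zone-a          # illustrative names
  nova aggregate-add-host ssd-hosts compute01
  nova aggregate-set-metadata ssd-hosts ssd=true

  # create a flavour whose extra spec matches the aggregate metadata
  nova flavor-create m1.ssd auto 4096 40 2        # illustrative sizing
  nova flavor-key m1.ssd set ssd=true

  # end-users can then target a zone explicitly when booting
  nova boot --flavor m1.ssd --image fedora-18 --availability-zone zone-a my-instance
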
early releases of openstack relied on the compute nodes themselves to have direct database access to update instance information, this posed a security threat as there was concern that compromised hypervisors would have wider access to the openstack environment. a new implementation known as the nova-conductor provides a method of isolating the database access from the rest of the services. not only does this alleviate the security concerns, it helps address the scalability problems. rather than having the database being accessed by hundreds (or thousands!) of nodes, a smaller quantity of database workers can be utilised and load-balanced across. whilst we’re on the subject of databases, as you can imagine the database can grow exponentially when an openstack cloud gets bigger, database archiving in grizzly attempts to address this by flushing old instance data into shadow tables, there’s no need for the table space to continue to spiral out of control with garbage records. the final point i wanted to make around nova is the addition of the evacuate method, this allows administrators to ‘evacuate’ all instances from a particular host, e.g. it needs to be upgraded or it failing, allowing mass migration off said node for maintenance. previously this would have been a lot more difficult to achieve. moving onto networking, quantum (note, now being called openstack networking) provides software defined networking or networking as a service to openstack clouds. it has become widely adopted and is typically the default networking mechanism in grizzly. the prior implementation of networking utilised ‘nova-network’ which provided networking access via l2 bridges with basic l3 and security provided by iptables, it had limited multi-tenancy options (using vlans) and didn’t scale well enough for cloud environments. quantum is the evolution of this, it allows complete network abstraction by virtualising network segments. quantum has received a lot of interest in the community with many vendors providing their own plugins, i.e. quantum provides an abstract api but it relies on plugins to implement the networks. there have been many interesting developments in quantum for the grizzly release cycle, one of the problems initially was the single point-of-failure architecture; an example being a single l3 agent or a single dhcp agent, obviously losing this node would mean a lack of external routing or dhcp for the instances. grizzly has implemented multiple agent support for these therefore reducing these bottlenecks. the concept of security groups is not new to openstack, in previous versions of nova-network it allowed us to set inbound firewall rules for our instances (or groups of instances), quantum is fully backwards compatible yet vastly enhances and extends the security group capabilities, allowing inbound as well as outbound regulations. most importantly, it is now able to configure rules on a per-port basis, i.e. for every network adapter attached to an instance, previously it was on a per instances basis. all of the configuration is exposed within horizon, giving end-users the ability to create and control networks and their topology. additionally, quantum is now able to support some higher layer features such as load balancers (lbaas) and vpns, but much of it is still a work in progress. for those of you unfamiliar with keystone, it provides an authentication and authorisation store for openstack, i.e. who’s who and can they do what they’re trying to do. 
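
As a rough illustration of the evacuate method and the per-port security groups described above; command spellings varied slightly between client releases, so treat this as a sketch rather than exact syntax, and the instance, host and group names are illustrative:

  # rebuild a single instance on another host after its hypervisor fails
  # (--on-shared-storage preserves the existing disk)
  nova evacuate my-instance compute02 --on-shared-storage

  # allow inbound SSH and outbound HTTPS for anything carrying this group
  quantum security-group-create web-tier
  quantum security-group-rule-create --direction ingress --protocol tcp \
    --port-range-min 22 --port-range-max 22 web-tier
  quantum security-group-rule-create --direction egress --protocol tcp \
    --port-range-min 443 --port-range-max 443 web-tier
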
keystone makes use of tokens, they’re provided to a user after authenticating with a username and password combination. it saves passwords from being passed around the cluster and provides an easy way of revoking compromised sessions. one of the limitations with previous versions was not supporting multi-factor authentication, e.g. a password with a string plus a token-code. this feature has now been implemented in keystone for grizzly, vastly enhancing the security of openstack. there’s a brand-new api version (v3.0) available for keystone too, providing additional features such as groups (not to be confused by tenants or projects) which allow select users to be grouped for reasons such as role-based access control. finally, the block-storage element, cinder, has evolved considerably. cinder provides block-storage to instances, typical use cases would be for data persistence or for tiered storage, e.g. ephemeral storage sitting on the hypervisors local disk but more performant persistent storage sitting on a san. cinder started off by providing block device support over iscsi to hypervisor hosts (which in turn presented the volumes as local scsi disks), to bridge the gap between the current implementation and what enterprises want out of openstack, a lot of work has gone into providing fibre channel and fcoe storage support. the project has welcomed many new contributions from hardware and software vendors in the form of drivers to provide storage resource backends, the list of supported storage platforms is significantly bolstered with the latest grizzly release. previous versions of cinder only permitted a single backend device to be used, grizzly supports multiple drivers simultaneously, therefore allowing multiple tiers of storage for the end-users. examples of this could be iscsi-based storage for low-priority workloads and fully-multipathed fc storage for higher-priority workloads; all of which completely abstracted. up until grizzly, backing up block devices was typically handed off to the storage platform and was not of concern to cinder. with the latest code it’s now possible to backup volumes straight into openstack swift (the completely scale-out, fault-tolerant object storage project). this vastly enhances the disaster recovery options for openstack, they’re true volume backups and are implementation agnostic. this write-up wouldn’t be complete without mentioning the two latest project additions to openstack that became incubating components in grizzly. firstly is heat (https://wiki.openstack.org/wiki/heat) which provides an orchestration layer based around compatibility with aws cloudformation templates. it implements basic high availability as well as automatic scaling of applications. secondly, ceilometer (https://wiki.openstack.org/wiki/ceilometer) provides a billing and metering framework, allowing monitoring of instances and what resources they are consuming. i’ve not given either of these projects justice in the previous sentences but i will write individual articles explaining how relevant and important they will become to the success of openstack. let me know if you’ve got any questions. posted in openstack | leave a reply openstack summit 2013 report posted on may 3, 2013 by admin reply note: the views expressed below are my own and do not, in any way, represent the views of my employer. the week before last i attended the openstack summit in portland, oregon. 
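
A rough sketch of the multi-backend cinder configuration and volume types mentioned earlier in this post; the section names, backend names and driver paths are illustrative only:

  # /etc/cinder/cinder.conf (fragment) - two tiers of block storage
  # [DEFAULT]
  # enabled_backends=lvm-standard,fc-fast
  #
  # [lvm-standard]
  # volume_backend_name=LVM_iSCSI
  # volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
  #
  # [fc-fast]
  # volume_backend_name=FC_FAST
  # volume_driver=<vendor fibre channel driver>

  # expose the tiers to end-users as volume types
  cinder type-create standard
  cinder type-key standard set volume_backend_name=LVM_iSCSI
  cinder type-create fast
  cinder type-key fast set volume_backend_name=FC_FAST
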
it was a chance for me to get the latest information about one of the most exciting open-source projects for the past few years. i’ve been involved with openstack for quite a few months now, mainly deploying environments and writing documentation to aid in my understanding as well as providing it out for others to consume; one of the problems with openstack is that it’s very difficult to get started. whilst i knew that openstack had an enormous market-hype with the vast majority of isvs and hardware vendors jumping on-board, nothing could prepare me for the overwhelming turnout at the summit; the event had actually sold-out, which for an open source conference is an achievement. the majority of the sessions were actually overflowing, if you didn’t turn up 20-30 minutes beforehand then you had little chance of getting somewhere to sit. it just goes to show the level of interest in this project, people from all over the world and of all career paths attended as they were either actively involved with openstack in some way or knew they had to learn more; the event had a very refreshing buzz about it, people wanted to be there, were passionate about the technology and could really see it going somewhere. the conference, with the exception of the daily keynotes, was split up into multiple tracks, for the active contributors/developers they had the design summit, but for those of us that wanted to gain an insight into the latest and greatest we found ourselves sitting through presentations covering the widest variety of topics. they also provided hands-on workshops all week, something that i found extremely valuable- for example, as i’m not a networking guy i sometimes find myself confused over complex networking and the concept of virtual networks, the hands-on quantum lab enabled me to gain a good understanding of it. even beginners were catered for, there were 101 sessions most days allowing people with very little experience of virtualisation and cloud computing to come away with an understanding of the openstack project and where it is headed. as a red hat employee myself, putting faces to the names of the colleagues i work with daily was a great opportunity, especially given our recent openstack distribution announcement; rdo, our community-supported offering ( http://openstack.redhat.com ). the ability to network with other openstack users (and potential future users) was extremely valuable, receiving feedback about what they wanted to use it for and what features they really wanted to see. in fact, when you look at the attendee list it goes to show the variety of attendees; it seemed to be a mix between the stereotypical linuxcon attendees and vmworld attendees, a very dynamic environment. the keynote sessions were extremely useful, the rackspace keynote being the headline for many was, as expected, really good. the statistic which they keep using is that whilst they’re not reducing the amount of contributions they make, their overall commit percentage continues to decrease; clearly proving the success of the project in the open-source community. it was also good to see that red hat is out on top now with the latest grizzly release and they’re not trying to keep that quiet! rackspace is using openstack in production (no surprises there!) but the way in which they use it is very interesting, they deploy openstack on openstack for providing “private clouds” on-top of the public cloud, eating their own dog-food at every level. 
they’ve done this with an extremely high level of reliability and api uptime, so when people say that openstack isn’t ready for production, i really do beg to differ. canonical’s keynote by mark shuttleworth was very good too, i think they’re at a set of crossroads, moving from the traditional ‘desktop environment’ to a more strategic cloud play, i’m just not sure they’ve got the resources to actually fulfill what they try and say, despite the fact that openstack has been predominantly written to run on ubuntu (historically, anyway). aside from the openstack “vendors”, organisations such as best buy, bloomberg, comcast and others presented what they use openstack for, how they’ve implemented it at scale and what they’ve learned (and contributed back!), it further goes to show that there’s real interest in many different areas if the use case is right, i.e. scale out, fault tolerant applications. what interests me is that openstack is clearly viewed by many as a threat, you take vmware for example; they are actively contributing to the project to enable their esxi hypervisor as a compute resource for openstack nova, they’re concerned that when people see the benefits of openstack and the fact that it abstracts much of the underlying compute resource, the requirement for esx will drop. vmware are at risk of becoming irrelevant in the long term where applications are written in different ways, i.e. to be more fault tolerant and not requiring hypervisor-oriented technologies such as ha, they have to try and retain a piece of the pie and leverage the existing investments that organisations have made on their technology. in addition, the next-step of the virtualisation piece is virtual-networking; quantum provides a virtual network abstraction service with multiple plugins for software and hardware based networking stacks. vmware is active in this space also, providing their nicira-based nvp plugin for quantum, this is an emerging technology and will likely be the later piece of the puzzle that gets adopted. vmware and canonical have just gone to market with a fully-supported offering, this joint venture provides a complete openstack environment for customers currently using vmware. as vmware didn’t previously have an operating-system, ubuntu is able to step in and plug this gap, a potentially strategic opportunity for both organisations. it’s not just vmware either, the likes of hp, dell, ibm are all jumping on the openstack bandwagon, all with sets of developers contributing upstream as well as to their own product offerings, seeing openstack as a way of making money. many vendors are providing additional components that allow openstack to integrate directly into existing environments, bridging the gap between upstream vanilla openstack and the modern-day datacenter. the vmware/canonical offering is too early to tell whether it will be a success or not, but in my opinion they are doing the right thing, i am a firm believer that any vendor looking to provide an openstack offering should be open about the partnerships they forge and the additional fringe components that they choose to support; because of the vast support model in the openstack trunk it will attract a wide variety of customers with a disprate set of requirements, picking and choosing technologies to support will be extremely important. openstack has opened the doors to a wide variety of new-startups, all providing additional layers or extensions to help integrate and ease the adoption of the product into organisations. 
examples of these include mirantis and swiftstack. mirantis is a company based out of russia, they, more than anyone, impressed me at the summit. they’ve written a tool called fuel which aims to provide a ground-up management platform for openstack; currently to implement openstack requires quite a lot of time, knowledge and experience, fuel is able to configure an array of the underlying bare-metal technologies including networking and storage as well as to provision and manage entire openstack environments from a web-interface. if i was in a position to acquire a company, mirantis would be at the top of my list right now! swiftstack provide management tools and support for deploying a swift-based object-storage cloud, their presentations were fascinating and for anyone interested in how swift works then they’ve written a free book ( http://www.swiftstack.com/book/ ) which i highly recommend. the exhibition room was full of vendors promoting either their distributions, their consultancy/architecture services or their additional add-on components, very rare for an open-source project… even at linuxcon it’s usually full of the normal open-source companies plus hardware guys, this represented a huge mix. what last week made me realise is that there’s an enormous opportunity for openstack, a lot of the community work has been done for the vendors wishing to pursue an openstack strategy, some vendors being in better positions than others to make it a successful venture; integration with existing enterprise technology will be extremely important for vendors to get right. there’s a lot of overlap between the capabilities of some existing products in the industry, however openstack, in my opinion, is addressing the problem to the next-generation architectures. there are lots of offerings/flavours/distributions out there, the most important thing about openstack-based clouds is interoperability… they must continue to be open, i.e. open api’s, open standards to allow portability between clouds. whilst i fear that some organisations (especially proprietary vendors) are getting involved in openstack because of the hype, it represents a turning point in the way that big corporations think- it’s yet again proving the power of open source and what can be achieved when we work together. long term, there are many areas in which we can improve openstack, there aren’t many organisations out there that are ready to implement an openstack environment unless they go greenfield with it; integration is key but the switch from traditional data centers to a fully software-defined environment is a big step to take and this step will take years to fully embrace. it doesn’t mean that traditional enterprise virtualisation will go away either, there will always be a requirement for legacy applications but the convergence of these technologies will be an intriguing concept to watch. over time i think that openstack will become a lot more than just a set of tools to build cloud environments; we see this with the latest tool sets such as the heat api and nova-baremetal, tools that are implementing features that have been traditionally excluded. it’s an exciting time to be involved with openstack, i look forward to seeing what the future can bring for the platform and making it a success. 
as i attended about 10 sessions per day (usually 30-40 minutes each), i won’t comment on them all, but some personal favourites to recommend people look into if they’re interested in learning more about what’s coming up in openstack: orchestration of fibre channel in cinder – the default out of the box block storage configuration in openstack is typically iscsi, whilst there’s plenty of additional options now fibre channel support has taken quite some time to make it into the code. the problems are typically around zoning, as far as nova is concerned this is almost irrelevant as all it has to do is attach the underlying disk to the instance in the same fashion as iscsi. this technology brings openstack closer to enterprise adoption. ( http://www.openstack.org/summit/portland-2013/session-videos/presentation/orchestration-of-fibre-channel-technologies-for-private-cloud-deployments ) ceilometer metrics -> metering – so, ceilometer is a new project within openstack, it was introduced in folsom as an incubated project but has made it into grizzly as a full component. it enables organisations to implement a flexible chargeback model on pretty much anything they want to plug it into. it vastly extends the very basic quotas and utilisation that folsom used to provide. there are still some limitations but it’s extremely powerful. ( http://www.openstack.org/summit/portland-2013/session-videos/presentation/ceilometer-from-metering-to-metrics ) openstack ha with mirantis – this talk actually turned into a product demonstration/pitch, but it was the one i was most impressed by. this demonstrates mirantis’ fuel implementation for managing openstack environments from bare-metal. many organisations keep asking “how can we deploy openstack to scale?” or “how can we make openstack highly available?”. mirantis attempts to solve the deployment and high availability problems with their tools and ammusingly they say it’s so easy, even a goat can do it! ( http://www.openstack.org/summit/portland-2013/session-videos/presentation/standup-ha-openstack-with-open-puppet-manifests-in-under-20-minutes-for-goat ) deploying and managing openstack with heat – this talk discusses how the “triple-o” or “openstack-on-openstack” project uses the heat api (and nova-baremetal) to deploy entire openstack environments automatically, blurring the differences between the cloud layer and the physical world. ( http://www.openstack.org/summit/portland-2013/session-videos/presentation/deploying-and-managing-openstack-with-heat ) software defined networking (scaling in the cloud) – one of the things i mentioned earlier was not being a networking guy, these sorts of presentations helped me understand how things fit in, where things were going and how virtual networking was solving real world problems with scale. gone are the days where we provide l2 bridges (+ vlan tagging) to virtual machines, in the world of software defined networking we can remove a lot of underlying complexity and control it all in software. this is an area that will become extremely important in the future. 
( http://www.openstack.org/summit/portland-2013/session-videos/presentation/scaling-in-the-cloud-the-hype-and-happenings-of-software-defined-networking ) note, all of the summit videos are freely available online at: http://www.openstack.org/summit/portland-2013/session-videos/ cheers, rhys posted in openstack | leave a reply macbook pro retina with linux (fedora 18) posted on april 24, 2013 by admin 15 i recently purchased a new macbook pro 13″ w/retina (early 2013, macbookpro10,2 ), it’s absolutely stunning and love using it. as i work on a day-to-day basis with linux, i decided to remove macos and deploy fedora 18 on it. i was pleasantly surprised that almost everything works out of the box. i did have to make some modifications once it was installed, mainly for performance but also to get sound and the wireless working. as the installation is just as easy as installing it on a non-apple machine, this guide assumes that you have already done so. if you’re having problems installing, i’d be happy to assist. what currently doesn’t work? well, thunderbolt hotplug currently isn’t supported; mac osx seems to use some form of magic to enable hotplugging and to control the thunderbolt controllers. whilst the linux kernel supports the thunderbolt controllers, it only ever works if the adapter is plugged in before the system is booted, i’ve successfully used a thunderbolt ethernet adapter. i wouldn’t expect thunderbolt hotplug anytime soon either, which is a shame. there’s also currently a bug with the mini-displayport -> vga, although hdmi works out of the box. i’ll work on the vga adapter problem and will update this post if/when it’s working. step one (wireless): the macbook pro ships with a broadcom bcm4331 wireless adapter, there’s an open-source driver available in the kernel ( b43 ) but it requires additional proprietary firmware from broadcom to use it successfully. i had no end of trouble with this driver, it didn’t support 5ghz/n networks and constantly dropped connectivity – completely unreliable. the solution that worked for me and has been rock-solid so far is the proprietary driver direct from broadcom ( wl ), it has open-source code to support the driver and provide a standard interface to the binary driver. broadcom ship this package on their website (http://www.broadcom.com/support/802.11/linux_sta.php) but for reasons unknown to me do not ship the latest code, in addition, via the rpmfusion repositories it’s available as a package for fedora. the guys over at ubuntu managed to get access to the latest packages and i was able to build it successfully on fedora (after reading lots of threads!). the downside is that the compiled module is inserted into each kernel manually after its built and therefore it needs to be built each time there’s a kernel update. i would rather do this and have stable wifi than use the b43 driver though; perhaps in the future either the b43 or wl driver will be stable enough upstream to use without manual compilation. 
below i’ve outlined the necessary steps for getting the driver working on fedora 18, note that the package comes as a debian package that we’ll extract: $ su - # yum groupinstall "development tools" -y # yum install binutils -y # mkdir ~/bcm4331 && cd ~/bcm4331 # wget http://shared.teratan.net/~rdo/bcm4331/wireless-bcm43142-dkms_6.20.55.19-1_amd64.deb # ar vx wireless-bcm43142-dkms_6.20.55.19-1_amd64.deb # tar -zxvf data.tar.gz # cd usr/src/wireless-bcm43142-6.20.55.19 # wget http://shared.teratan.net/~rdo/bcm4331/wl_cfg80211.c # mv wl_cfg80211.c src/wl/sys/ # make && make install # depmod -a # modprobe -r b43 ssb bcma # modprobe wl # echo "blacklist bcma" >> /etc/modprobe.d/blacklist.conf # echo "blacklist ssb" >> /etc/modprobe.d/blacklist.conf # echo "blacklist b43" >> /etc/modprobe.d/blacklist.conf # echo "wl" >> /etc/modules-load.d/wireless.conf # restorecon -v /etc/modules-load.d/wireless.conf the above commands basically grab the ubuntu package, over-write some minor changes required to build it on a 3.8+ kernel, compile the module for the local system and install it for the current kernel. in addition it also updates the module “database” and inserts the module with associated dependencies (cfg80211, lib80211 and lib80211_crypt_tkip). finally, it blacklists the open-source drivers and enables the ‘ wl ‘ driver to be started on boot up. the system should now report that the wireless adapter is available and that it’s using the correct driver: # lspci -k | grep -n2 4331 | tail -n3 42:03:00.0 network controller: broadcom corporation bcm4331 802.11a/b/g/n (rev 02) 43- subsystem: apple inc. device 010f 44- kernel driver in use: wl # lsmod | grep wl wl 3074693 0 cfg80211 495993 1 wl lib80211 13968 2 wl,lib80211_crypt_tkip at this point i’d recommend rebooting to ensure that the machine comes up successfully and that the drivers are loaded automatically (confirm using the above steps again). you should also not be able to see the ‘ b43 ‘ driver in the list of modules. note : i did have some instabilities when using ‘ iwconfig ‘, i’d recommend that you avoid this tool if you can, ‘ ifconfig’ works without problem. step two (sound/audio): as the device is very much like any other manufacturers equipment, it utilises commodity components. the ivybridge cpu provides the standard intel hd4000 video card and therefore works right out of the box; upstream support has been there for a long time. this is also the same story for a number of other components, the sound card whilst supported requires a slight module modification to get it to work properly. what you may find is that the modules get automatically loaded by fedora but no sound card is visible. the module that we’re using to provide audio is ‘ snd_hda_intel ‘, we need to pass an additional model identifier to it in order to initialise the card: $ su - # echo "options snd_hda_intel model=mbp101" >> /etc/modprobe.d/snd_hda_intel.conf # restorecon -v /etc/modprobe.d/snd_hda_intel.conf note that you will either need to reboot or reload the module in order to get sound working; i recommend the former as it will integrate with the rest of your environment without having to restart all of the required services manually. when the machine has come back up, confirm that the module was loaded correctly with the model identifier: $ cat /sys/module/snd_hda_intel/parameters/model mbp101,(null),(null),(null),(null)...... step three (performance): regardless of the specification you chose to buy, the macbook pro is a very capable machine. 
there are a number of recommendations that i’d like to make in order to maximise the performance of your system. firstly, any modern machine with plenty of ram will very rarely need to touch swap space but it certainly still is a requirement, e.g. for hibernation or for various types of workloads. that being said, we should still ask linux to avoid using the swap space wherever possible: $ su - # echo "vm.swappiness=1" >> /etc/sysctl.d/performance.conf # echo "vm.vfs_cache_pressure=50" >> /etc/sysctl.d/performance.conf # restorecon -v /etc/sysctl.d/performance.conf next, as we’re using non-rotational disks (i.e. ssd/flash), the ‘ noop ‘ block scheduler provides a number of performance benefits that we can exploit; it’s essentially no scheduling at all, just basic fifo (first in, first out), alternatively the ‘ deadline ‘ scheduler can be used which tries to prioritise reads to enable some sort of read-performance when there is heavy write i/o. my personal preference is ‘ noop ‘ but your workload may require ‘ deadline ‘. you can view the current scheduler algorithm by using the following command: $ cat /sys/block/sda/queue/scheduler noop deadline [cfq] in the above example, you can see that i’m currently using the ‘ cfq ‘ method. it’s easy to make a one-time change to the scheduler by echoing values into that virtual filesystem but to make this persistent we need to add a udev rule: # echo action=="add|change", kernel=="sda", attr{queue/rotational}=="0", attr{queue/scheduler}="noop" >> /etc/udev/rules.d/60-scheduler.rules # restorecon -v /etc/udev/rules.d/60-scheduler.rules note that i have explicity specified the ‘sda’ device here (the internal ssd) and not all sd*, this is because i may want to attach external disks, e.g. via usb, which will be rotational disks and would therefore benefit from the default scheduler. feel free to adjust the above udev rule to specify ‘ deadline ‘ if that’s your preference. finally, as we’re using a solid state disk it makes sense to implement trim/discard to aid with the wearing of the disk; i’m not going to go into too much detail of why we do this but it will prolong the life of the drive. there are a number of changes to make, firstly your fstab needs to be updated to mount your drives with these options: $ su - # vi /etc/fstab depending on your partition layout and whether you use encrypted volumes the next steps may be quite different for you. i will, however, assume that you’re using a default partition layout with a separate /home and / mount points. simply add the parameters ‘discard’ and ‘noatime’ to each mount. for example: (change) /dev/mapper/vol0-rootvol / ext4 defaults 1 1 (to) /dev/mapper/vol0-rootvol / ext4 defaults,discard 1 1 (repeat for /home) ('i' to edit and 'esc -> :wq!' to save and quit) once that’s complete we need to test your changes: # mount -o remount / # mount -o remount /home # mount | egrep '(on /|/home)' /dev/mapper/vol0-rootvol on / type ext4 (rw,relatime,seclabel,discard,data=ordered) /dev/mapper/cryptovol on /home type ext4 (rw,relatime,seclabel,discard,data=ordered) if successful, you should see the discard option listed in your output, yours my vary but ensure it has discard listed. please note: if you want to increase the performance further and are willing to accept a bit of risk, you can mount your volumes with the ‘ noatime ‘ option too; this removes the requirement for file updates for just reads… just bear in mind that applications may break! fedora by default uses ‘ relatime ‘ which is a nice compromise. 
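
For reference, the scheduler rule above written directly into the rules file: udev match keys are conventionally uppercase, the quoting below is needed if the rule is echoed from a shell, and sda is the internal SSD as in the post:

  # /etc/udev/rules.d/60-scheduler.rules - pin the noop scheduler to the internal SSD only
  cat > /etc/udev/rules.d/60-scheduler.rules << 'EOF'
  ACTION=="add|change", KERNEL=="sda", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="noop"
  EOF
  restorecon -v /etc/udev/rules.d/60-scheduler.rules
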
— additional tweaks coming tomorrow — posted in uncategorized | 15 replies how to fix mouse sensitivity in gnome 3 posted on july 3, 2012 by admin 3 one of my pet hates in gnome 3 on fedora 16/17 (perhaps in other distributions also) is the apparent lack of functionality in the mouse pointer speed/acceleration. the gui seems to have very little influence on the user experience… continue reading → posted in gnome , linux | 3 replies how to enable nested kvm posted on june 26, 2012 by admin 9 if you’ve arrived at this blog post i’d have to assume you’re familiar with what kvm is, but for the benefit of those who are unaware or are just interested in reading more, i’ll give a bit of a background… kernel-based virtual machine (kvm) is a kernel module that was originally developed by an israeli organisation called qumranet to provide native virtualisation technology for linux-based platforms; essentially turning the kernel into a tier-1 hypervisor. it has since been ported to multiple other platforms and architectures other than 32/64-bit x86. it got initially adopted into the upstream linux kernel as of 2.6.20 (back in 2007). continue reading → posted in kvm , linux | 9 replies switching trackpad scroll direction in linux posted on june 7, 2012 by admin 2 if you’re like me and use both a mac and a linux laptop (or perhaps dual-booting on the same hardware) and like the scroll-direction that lion gives you, you probably found it annoying having to switch between the two platforms… going the wrong way and forgetting which way is which on each platform! continue reading → posted in linux , mac | 2 replies “fixing” kernel_task cpu problems in macos 10.7/10.8 posted on june 5, 2012 by admin 139 update (early 2013) : when i wrote this guide it was focusing on lion 10.7, many people have, of course, upgraded to 10.8 and have reported success using the same principles. however, the plist entries have not been added for newer models, e.g. the new macbook air or macbook pro (+retina). therefore, if you follow the guide exactly you may run into problems such as your model identifier not being visible. after diagnosing this with others via email it would appear that the system uses another plist in the directory, therefore removing all of the plists has worked. i cannot comment further or prove this to be the case as i don’t have the available hardware. let me know whether this works for you….. i use a wide variety of operating systems at home, all services are provided by linux, e.g. firewall, routing, file-storage and dlna media. however, i like using a mac too, i have a late-2009 macbook air which i use whilst traveling. despite all of lion’s flaws, i really like using it- full-screen apps, gestures and the new mail.app is really impressive. the specification of this machine really isn’t anything special, the lack of expansion really leaves a lot to be desired but for what i do- it’s plenty. i will certainly be upgrading to the new ivy bridge macbook air when it comes out, perhaps then i’ll have more than 2gb memory and can run vm’s too(!). continue reading → posted in mac | 139 replies fixing vmware pvscsi kernel-update problem [rhel5] posted on april 12, 2012 by admin 10 recently, i worked on a customer problem which involved using the para-virtualisation drivers that vmware ship as part of their guest tools package for linux operating systems. 
the vmware package provides a number of kernel modules, the most significant being their pvscsi (block-storage) and vmxnet3 (network) allowing enhanced performance in a virtual environment- rather than emulating scsi or an ethernet adaptor such as intel’s e1000- the choice of what to present to the virtual machine can be configured at any time, but if the drivers/modules aren’t available in the guest operating system, the devices cannot be used. continue reading → posted in linux , vmware | tagged rhel | 10 replies archives may 2013 april 2013 july 2012 june 2012 april 2012 meta register log in proudly powered by wordpress

URL analysis for rdoxenham.com


http://www.rdoxenham.com/?p=325
http://www.rdoxenham.com/?p=288#comments
http://www.rdoxenham.com/?p=259#comments
http://www.rdoxenham.com/?cat=10
http://www.rdoxenham.com/?m=201207
http://www.rdoxenham.com/?p=317#comments
http://www.rdoxenham.com/#secondary
http://www.rdoxenham.com/?tag=rhel
http://www.rdoxenham.com/?page_id=249
http://www.rdoxenham.com/?p=273#comments
http://www.rdoxenham.com/?p=273#more-273
http://www.rdoxenham.com/wp-login.php
http://www.rdoxenham.com/#content
http://www.rdoxenham.com/?p=335#respond
http://www.rdoxenham.com/?p=288

Whois Information


Whois is a protocol that provides access to registration information. It shows when a website was registered, when it will expire, and the contact details of the site. In a nutshell, the record includes the following information:

Domain Name: RDOXENHAM.COM
Registry Domain ID: 1573729521_DOMAIN_COM-VRSN
Registrar WHOIS Server: whois.enom.com
Registrar URL: http://www.enom.com
Updated Date: 2017-03-04T12:37:35Z
Creation Date: 2009-10-28T01:06:04Z
Registry Expiry Date: 2017-10-28T01:06:04Z
Registrar: eNom, Inc.
Registrar IANA ID: 48
Registrar Abuse Contact Email:
Registrar Abuse Contact Phone:
Domain Status: clientTransferProhibited https://icann.org/epp#clientTransferProhibited
Name Server: NS1.THEWEBHOSTSERVER.COM
Name Server: NS2.THEWEBHOSTSERVER.COM
Name Server: NS3.THEWEBHOSTSERVER.COM
Name Server: NS4.THEWEBHOSTSERVER.COM
DNSSEC: unsigned
URL of the ICANN Whois Inaccuracy Complaint Form: https://www.icann.org/wicf/
>>> Last update of whois database: 2017-09-21T09:21:33Z <<<

For more information on Whois status codes, please visit https://icann.org/epp

NOTICE: The expiration date displayed in this record is the date the
registrar's sponsorship of the domain name registration in the registry is
currently set to expire. This date does not necessarily reflect the expiration
date of the domain name registrant's agreement with the sponsoring
registrar. Users may consult the sponsoring registrar's Whois database to
view the registrar's reported date of expiration for this registration.

TERMS OF USE: You are not authorized to access or query our Whois
database through the use of electronic processes that are high-volume and
automated except as reasonably necessary to register domain names or
modify existing registrations; the Data in VeriSign Global Registry
Services' ("VeriSign") Whois database is provided by VeriSign for
information purposes only, and to assist persons in obtaining information
about or related to a domain name registration record. VeriSign does not
guarantee its accuracy. By submitting a Whois query, you agree to abide
by the following terms of use: You agree that you may use this Data only
for lawful purposes and that under no circumstances will you use this Data
to: (1) allow, enable, or otherwise support the transmission of mass
unsolicited, commercial advertising or solicitations via e-mail, telephone,
or facsimile; or (2) enable high volume, automated, electronic processes
that apply to VeriSign (or its computer systems). The compilation,
repackaging, dissemination or other use of this Data is expressly
prohibited without the prior written consent of VeriSign. You agree not to
use electronic processes that are automated and high-volume to access or
query the Whois database except as reasonably necessary to register
domain names or modify existing registrations. VeriSign reserves the right
to restrict your access to the Whois database in its sole discretion to ensure
operational stability. VeriSign may restrict or terminate your access to the
Whois database for failure to abide by these terms of use. VeriSign
reserves the right to modify these terms at any time.

The Registry database contains ONLY .COM, .NET, .EDU domains and
Registrars.

  REGISTRAR eNom, Inc.

SERVERS

  SERVER com.whois-servers.net

  ARGS domain =rdoxenham.com

  PORT 43

  TYPE domain
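
The SERVER, ARGS and PORT values above describe the raw query behind this report. A minimal sketch of performing the same lookup by hand, assuming nc (netcat) or a whois client is available:

  # whois is a plain-text protocol over TCP port 43
  printf 'domain rdoxenham.com\r\n' | nc com.whois-servers.net 43

  # or let the whois client pick the server itself
  whois rdoxenham.com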

DOMAIN

  NAME rdoxenham.com

  CHANGED 2017-03-04

  CREATED 2009-10-28

STATUS
clientTransferProhibited https://icann.org/epp#clientTransferProhibited

NSERVER

  NS1.THEWEBHOSTSERVER.COM 178.79.134.103

  NS2.THEWEBHOSTSERVER.COM 139.162.185.132

  NS3.THEWEBHOSTSERVER.COM 178.79.142.26

  NS4.THEWEBHOSTSERVER.COM 173.255.228.168

  REGISTERED yes


Mistakes


The following list shows possible misspellings that internet users might type when searching for this website.

  • www.urdoxenham.com
  • www.7rdoxenham.com
  • www.hrdoxenham.com
  • www.krdoxenham.com
  • www.jrdoxenham.com
  • www.irdoxenham.com
  • www.8rdoxenham.com
  • www.yrdoxenham.com
  • www.rdoxenhamebc.com
  • www.rdoxenhamebc.com
  • www.rdoxenham3bc.com
  • www.rdoxenhamwbc.com
  • www.rdoxenhamsbc.com
  • www.rdoxenham#bc.com
  • www.rdoxenhamdbc.com
  • www.rdoxenhamfbc.com
  • www.rdoxenham&bc.com
  • www.rdoxenhamrbc.com
  • www.rdoxenham4bc.com
  • www.rdoxenhamc.com
  • www.rdoxenhambc.com
  • www.rdoxenhamvc.com
  • www.rdoxenhamvbc.com
  • www.rdoxenhamvc.com
  • www.rdoxenham c.com
  • www.rdoxenham bc.com
  • www.rdoxenham c.com
  • www.rdoxenhamgc.com
  • www.rdoxenhamgbc.com
  • www.rdoxenhamgc.com
  • www.rdoxenhamjc.com
  • www.rdoxenhamjbc.com
  • www.rdoxenhamjc.com
  • www.rdoxenhamnc.com
  • www.rdoxenhamnbc.com
  • www.rdoxenhamnc.com
  • www.rdoxenhamhc.com
  • www.rdoxenhamhbc.com
  • www.rdoxenhamhc.com
  • www.rdoxenham.com
  • www.rdoxenhamc.com
  • www.rdoxenhamx.com
  • www.rdoxenhamxc.com
  • www.rdoxenhamx.com
  • www.rdoxenhamf.com
  • www.rdoxenhamfc.com
  • www.rdoxenhamf.com
  • www.rdoxenhamv.com
  • www.rdoxenhamvc.com
  • www.rdoxenhamv.com
  • www.rdoxenhamd.com
  • www.rdoxenhamdc.com
  • www.rdoxenhamd.com
  • www.rdoxenhamcb.com
  • www.rdoxenhamcom
  • www.rdoxenham..com
  • www.rdoxenham/com
  • www.rdoxenham/.com
  • www.rdoxenham./com
  • www.rdoxenhamncom
  • www.rdoxenhamn.com
  • www.rdoxenham.ncom
  • www.rdoxenham;com
  • www.rdoxenham;.com
  • www.rdoxenham.;com
  • www.rdoxenhamlcom
  • www.rdoxenhaml.com
  • www.rdoxenham.lcom
  • www.rdoxenham com
  • www.rdoxenham .com
  • www.rdoxenham. com
  • www.rdoxenham,com
  • www.rdoxenham,.com
  • www.rdoxenham.,com
  • www.rdoxenhammcom
  • www.rdoxenhamm.com
  • www.rdoxenham.mcom
  • www.rdoxenham.ccom
  • www.rdoxenham.om
  • www.rdoxenham.ccom
  • www.rdoxenham.xom
  • www.rdoxenham.xcom
  • www.rdoxenham.cxom
  • www.rdoxenham.fom
  • www.rdoxenham.fcom
  • www.rdoxenham.cfom
  • www.rdoxenham.vom
  • www.rdoxenham.vcom
  • www.rdoxenham.cvom
  • www.rdoxenham.dom
  • www.rdoxenham.dcom
  • www.rdoxenham.cdom
  • www.rdoxenhamc.om
  • www.rdoxenham.cm
  • www.rdoxenham.coom
  • www.rdoxenham.cpm
  • www.rdoxenham.cpom
  • www.rdoxenham.copm
  • www.rdoxenham.cim
  • www.rdoxenham.ciom
  • www.rdoxenham.coim
  • www.rdoxenham.ckm
  • www.rdoxenham.ckom
  • www.rdoxenham.cokm
  • www.rdoxenham.clm
  • www.rdoxenham.clom
  • www.rdoxenham.colm
  • www.rdoxenham.c0m
  • www.rdoxenham.c0om
  • www.rdoxenham.co0m
  • www.rdoxenham.c:m
  • www.rdoxenham.c:om
  • www.rdoxenham.co:m
  • www.rdoxenham.c9m
  • www.rdoxenham.c9om
  • www.rdoxenham.co9m
  • www.rdoxenham.ocm
  • www.rdoxenham.co
  • rdoxenham.comm
  • www.rdoxenham.con
  • www.rdoxenham.conm
  • rdoxenham.comn
  • www.rdoxenham.col
  • www.rdoxenham.colm
  • rdoxenham.coml
  • www.rdoxenham.co
  • www.rdoxenham.co m
  • rdoxenham.com
  • www.rdoxenham.cok
  • www.rdoxenham.cokm
  • rdoxenham.comk
  • www.rdoxenham.co,
  • www.rdoxenham.co,m
  • rdoxenham.com,
  • www.rdoxenham.coj
  • www.rdoxenham.cojm
  • rdoxenham.comj
  • www.rdoxenham.cmo