Month: September 2005

Storage component granularity in Warehousing

I was involved in an interesting debate this morning with another technical architect on the warehouse and members of the DBA Team, who manage all the databases, including the warehouse. The debate centred on the fact that we have a large number of tablespaces (and therefore datafiles) in our warehouse architecture – not something the DBA Team seemed very comfortable with.
Our warehouse architecture has followed various design principles in its lifetime – some of which relate to the use of partitioning, read only tablespaces and the mapping of tables / partitions to tablespaces to datafiles to volume groups at the SAN level…

  • Tables with a volume greater than (or predicted to be greater than) 1Gb are partitioned – we didn’t feel there was much point in focusing on the small (relatively speaking) stuff so our cut off is 1Gb. Any tables less than 1Gb in volume are allocated to either a generic SMALL or MEDIUM tablespace.
  • Date range partitioning is used on tables greater than 1Gb in volume – wherever possible. We’ve used a granularity of 1 month per partition everywhere as it matches our current and likely future analytical and processing requirements. It also means that, since we’re always dealing with a month’s worth of data at a time for most of the scheduled processing, our scalability is good.
  • For partitioned tables, each partition is allocated its own unique smallfile tablespace. No tablespace has more than 1 partition.
    We create each partitioned table that uses monthly ranges with initially just 2 partitions – one for the current month and one for the upcoming month. As time progresses and the months roll by, we will move older read/write partitions to read only. For tables where we load historical data all the historical partitions are created as well and once the initial data load is completed the tablespaces holding the historical partitions are made read only.
  • Each tablespace has only 1 datafile – only 1 datafile is required since the partitioning granularity is enough to ensure that the volume of data is less than the smallfile datafile limit for all the tables in our environment. (I think our maximum datafile size is approximately 3.5Gb.) A minimal sketch of this partition-to-tablespace-to-datafile layout follows this list.
  • Each datafile is allocated to a single Volume which is mapped onto a Volume Group on the SAN.
  • RMAN is used for managing all backup/recovery.
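To make the 1:1:1 mapping of Partition:Tablespace:Datafile concrete, here’s a minimal sketch of the layout – the table, tablespace, datafile and column names are all hypothetical and the sizes purely illustrative:

CREATE TABLESPACE sales_2005_09 DATAFILE '/vol01/oradata/dwh/sales_2005_09_01.dbf' SIZE 2G;
CREATE TABLESPACE sales_2005_10 DATAFILE '/vol02/oradata/dwh/sales_2005_10_01.dbf' SIZE 2G;

-- The table starts life with just the current and upcoming monthly partitions
CREATE TABLE sales
( sale_date   DATE   NOT NULL
, account_key NUMBER NOT NULL
, amount      NUMBER
)
PARTITION BY RANGE (sale_date)
( PARTITION sales_2005_09 VALUES LESS THAN (TO_DATE('01-10-2005','DD-MM-YYYY')) TABLESPACE sales_2005_09
, PARTITION sales_2005_10 VALUES LESS THAN (TO_DATE('01-11-2005','DD-MM-YYYY')) TABLESPACE sales_2005_10
);

-- Once a month has rolled by and its processing is complete, the tablespace goes read only
ALTER TABLESPACE sales_2005_09 READ ONLY;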


Now, the contentious issue these design principles create is that we end up with a rather large number of datafiles – currently around the 6,000 mark. This is not something the DBA Team are used to, and it doesn’t sit well with them since managing all these tablespaces/datafiles is onerous without some form of manageability infrastructure.

The question raised by the DBA Team was how we had arrived at the design principle of allocating one tablespace (with one datafile) per partition.

The view of myself and the other warehouse technical architect was that this approach creates a very granular set of storage components, which is advantageous in the following ways:

Availability

If we encountered a loss/corruption of data in one partition we’d be able to offline that tablespace and recover the single datafile forming that tablespace (and therefore partition) from RMAN – thereby limiting the scope of the problem/outage to that specific partition.

Whilst the single datafile is being recovered only the single tablespace is offline and therefore only a single partition is unavailable for query.

Recovery

In recovering a single partition of a table, the volume of data to be processed would be restricted to one datafile, which would lead to the shortest possible recovery time.
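As a rough sketch of what that single-partition recovery looks like in RMAN (the datafile path is hypothetical), only the one datafile – and therefore the one tablespace/partition – is offline while the restore and recover run:

RUN
{
  SQL 'ALTER DATABASE DATAFILE ''/vol01/oradata/dwh/sales_2005_08_01.dbf'' OFFLINE';
  RESTORE DATAFILE '/vol01/oradata/dwh/sales_2005_08_01.dbf';
  RECOVER DATAFILE '/vol01/oradata/dwh/sales_2005_08_01.dbf';
  SQL 'ALTER DATABASE DATAFILE ''/vol01/oradata/dwh/sales_2005_08_01.dbf'' ONLINE';
}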

Post Recovery Processing

After a recovery operation has successfully completed, the standard ETL processes would need to run to repeat the work which has been lost due to NOLOGGING transactions. If the scope of the affected data loss/recovery is limited to the granularity of a single datafile (and therefore a single tablespace and partition) then the post recovery processing would be minimised by having only to redo the work to rebuild that single partition.

IO Control

Using a SAME (Stripe And Mirror Everything) approach everywhere is ideal; however, in practice (and your mileage may vary here, obviously), you don’t tend to stripe and mirror everything as one unit of storage. You don’t tend to keep adding more and more disks and having just one big stripe across every disk in the array – that’s not to say you can’t do it, but there are limits to the manageability if you approach things that way.

We tend to have a number of volume groups covering all of our 8Tb of storage, each of which contains five RAID 5 sets. We tend to round-robin the tablespaces across all the Volume Groups in order of Data Object then Partition, but that could lead to some of the Volume Groups being hotter than others, or having more files on them than others if the number of files isn’t divisible by the number of volume groups. In such cases we have the opportunity to move files from one Volume to another so they appear on a different Volume Group – at the granularity of a single Partition in our case, because we’ve used this 1:1:1 mapping of Partition:Tablespace:Datafile.
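As a sketch (with hypothetical tablespace and datafile names), rebalancing one partition onto a different Volume Group is then just a datafile move:

ALTER TABLESPACE sales_2005_03 OFFLINE NORMAL;
-- copy the datafile to a volume on the quieter Volume Group at the OS level, then:
ALTER TABLESPACE sales_2005_03 RENAME DATAFILE
  '/vol02/oradata/dwh/sales_2005_03_01.dbf' TO '/vol07/oradata/dwh/sales_2005_03_01.dbf';
ALTER TABLESPACE sales_2005_03 ONLINE;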

I guess the main reason for making our architecture a very granular one is the idea that it maximises flexibility.

Our DBA Team were not overly enthusiastic about this design principle and identified a number of their concerns:

  • Time taken to backup controlfile to trace is excessive
  • Size of controlfile is very large
  • It’s difficult to manage such a large number of files
  • We can’t see the tablespace/datafiles easily in Enterprise Manager
  • It doesn’t give us anything over an approach that combines all the partitions for the same period, across the monthly partitioned tables, into one tablespace for that period
  • Using NOLOGGING means the data is out of sync – we’d need to use a full database recovery

In response to these concerns we argued that:

  • Time taken to backup controlfile was less than 1 minute so still acceptable.
  • Yes, it’s more difficult to manage a larger number of files than a smaller number, but even if we were ruthless with our file count we’d still have lots of files because there is lots of data (around 4Tb) – the solution to managing this is an appropriate script-driven infrastructure, which wouldn’t care whether there were 500 or 5000 files.
  • You can’t see more than 100 tablespaces easily in Enterprise Manager – but so what? As we said, you need an appropriate infrastructure for dealing with a large file count database – whether the file count is 500 or 5000.
  • Combining partitions of data from the same time period into fewer tablespaces (or just one) is certainly possible and would reduce the file count – but flexibility is reduced in terms of availability, recovery, post recovery processing and IO control.
  • Yes, NOLOGGING results in, effectively, data loss even after a recovery is successfully completed – but that’s where the rerunning of specific, appropriate ETL processing will recover/rebuild the data/indexes back to the required state.

It was certainly an interesting debate and as a team we agreed that there were specific things to be investigated in terms of the recovery flexibility and the level of future partitions to be built…we’ll be revisiting it in a fortnight.

Right, I’m off to bed now – gotta feed my baby son Jude…good night one and all!

Oracle Collaborative Support – My first experience

I’ve been running a TAR with Oracle for the last day or so and struggling in vain to explain myself to Martin, the support representative dealing with my enquiry…in the end he suggested we use Collaborative Support, so that he could watch over the net while I demonstrated my problem on the desktop.

It’s not something I’d done before so I was keen to try it out and I wasn’t disappointed. It was far more beneficial to be talking to someone and showing them the problem before their eyes – in the end Martin understood the issue I was raising and after asking for some screen shots of the things I’d demonstrated he will be progressing the TAR with the relevant product development team.

Maybe I should have read Tom’s guide on How to ask questions!

(For interest only, the TAR is 4733628 and relates to me trying to find out what the number in square brackets is on the Action section of a SQL Statement listed out on Top SQL in Enterprise Manager – I’m trying to work out if I can trace it back to the name of a specific OWB Mapping).

Addendum – Martin from Support came back to me on this and it transpires that the number in square brackets is a Thread ID, which has no easy way of tracing back to a mapping name…they suggested matching the SQL back to that in ALL_SOURCE as a way of tracing back…would probably work but it’s a bit painful…perhaps a future product improvement suggestion.

OCP Certification

I’m not OCP Certified – let me say that from the outset. Why ? Well, a number of reasons really…

Cost – it would cost me around 10k to cover materials, courses, exam fees and most importantly lost working time for learning the syllabus and taking the exams.

Mandatory Oracle course requirement – it seems one must take an Oracle course as well as pass the exams – whoever you are and whatever your skills/background.

Ongoing costs – the exams expire as they revolve around specific versions of the products – this means more downtime for revision and more exam fees.

No difference in salary/contract rate – this is just my gut feeling, but I’m pretty certain my current clients wouldn’t pay me any more at my contract renewal if I happened to gain certification before then.

No more guaranteed to get/keep jobs/contracts than a non-certified person – again, just my gut feeling, but I’ve never struggled to find work as a permie or contractor and I’m not often asked about it by agents or prospective employers.

That’s not to say that it’s a bad thing to go through the process of getting certified – if I had the spare time I had been thinking about the possibility of doing the exams just so I could refresh all my knowledge in a structured manner – but not now that the course attendance is mandatory. It’s just too much money for too little in return. I really can’t understand why Oracle did that except for the extra cash in the short term – maybe they’ve realised most people gaining certification are employed by organisations who already put their people on courses, and therefore they might increase their education revenues – but surely they must have realised they’d be putting off a lot of people with this approach. Me for one.

I’m sure if the situation arises where 2 candidates are up for a job and the only real difference is that one has OCP, then that person will perhaps have their nose in front – but I’ve never come across that scenario.

My brother Steve is a Microsoft Certified Systems Engineer and it certainly has helped him find contracts at better than average rates – so certification as a concept isn’t all bad. I’m just not so sure it works in the Oracle community.

No offence intended to any of you who are certified – in fact, congratulations – I’m sure it’s not easy.

Blog Syndication Sorted!

I got an email back from Brian Duff who manages orablogs as I’d asked for my blog to be included on there…he asked if I could set my RSS up to deliver RSS 2.0 otherwise it won’t appear…he pointed me in the direction of a post by Robert Vollman here which talks about setting up RSS 2.0 with FeedBurner…where I’ve got RSS syndication running from.

I followed his instructions and emailed Brian back…hopefully I’ll appear on orablogs shortly. As a side effect changing to RSS 2.0 enabled my inclusion on the orafaq aggregator to start showing my posts which was a bonus.

Nothing much interesting to post today…my wife is out in the garden and I’m minding my sleeping baby son – Jude…so I’m reading through some of the Oracle OpenWorld presentations kindly posted by Mark Rittman. Can’t wait for the UKOUG event!

Blog rework

Been trying to get my syndication sorted today – orafaq want RSS but Blogspot only do ATOM…I’ve set my feed up via FeedBurner here and emailed Frank at orafaq to see if they can start retrieving my posts…should find out tomorrow.

Changed my template today too – was having problems with the layout on the previous one (in the end it wasn’t the template it was the formatting in some of my posts).

SCD2s, Many To Many Relationships and Associative Tables in a normalised Enterprise Data Model

An interesting design dilemma faced us in our Warehouse recently…first a bit of background though…

We have 3 main areas in our Warehouse

A Historical Data Store (HDS), where we have audit-trailed SCD Type 2 Tables for each operational data source we take in (except Facts).

A Business Data Model (BDM), which is our fully normalised, integrated, temporal enterprise model encompassing all the data sources.

An Analytical Modelling Layer (AML), which is the area where we provide Dimensional Models and application-specific Data Objects.

(Yes, we put our feet in the Inmon camp rather than the Kimball camp).

Our users spend most of their time in the AML but have access to the BDM and HDS as well.

The BDM is constructed using a mixture of SCD Type 1 and 2 tables and uses a number of techniques to deal with the temporal aspects of the data since the impact of time on normally “One to Many” relationships can result in them becoming “Many to Many”.

What to do ?

Our approach, after much consideration, was to take a given data source, e.g. Customer, and create 2 tables:

CUSTOMER

  • CUSTOMER Warehouse Key
  • Legacy Key columns
  • FROM_DATE
  • TO_DATE

CUSTOMER_HISTORY

  • CUSTOMER_HISTORY Warehouse Key
  • CUSTOMER Warehouse Key (Foreign Key to CUSTOMER)
  • All the columns of the data source
  • FROM_DATE
  • TO_DATE

In this way, each Customer would have a single record in the CUSTOMER table and one or more records in the CUSTOMER_HISTORY table over time.
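In DDL terms the pair looks something like this – a sketch with hypothetical column names (the legacy key and attribute columns obviously vary by data source):

CREATE TABLE customer
( customer_wk    NUMBER       NOT NULL  -- CUSTOMER Warehouse Key
, legacy_cust_id VARCHAR2(30) NOT NULL  -- Legacy Key column(s)
, from_date      DATE         NOT NULL
, to_date        DATE
, CONSTRAINT customer_pk PRIMARY KEY (customer_wk)
);

CREATE TABLE customer_history
( customer_history_wk NUMBER NOT NULL   -- CUSTOMER_HISTORY Warehouse Key
, customer_wk         NUMBER NOT NULL   -- Foreign Key to CUSTOMER
  -- ...all the attribute columns of the data source...
, from_date           DATE   NOT NULL
, to_date             DATE
, CONSTRAINT customer_history_pk PRIMARY KEY (customer_history_wk)
, CONSTRAINT customer_history_cust_fk FOREIGN KEY (customer_wk) REFERENCES customer (customer_wk)
);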

We applied the same to Accounts which are operated for Customers…

ACCOUNT

  • ACCOUNT Warehouse Key
  • Legacy Key columns
  • FROM_DATE
  • TO_DATE

ACCOUNT_HISTORY

  • ACCOUNT_HISTORY Warehouse Key
  • ACCOUNT Warehouse Key (Foreign Key to ACCOUNT)
  • All the columns of the data source
  • FROM_DATE
  • TO_DATE

Now, the point-in-time relationship in the operational systems between a Customer and their Accounts is “One to Many”, yet when the Customer and Account tables are held as SCD Type 2 tables in the warehouse, the relationship becomes “Many to Many”. The splitting of the Customer and Account data sources into a “Key” table and a “Full History” table allows the relationship between the CUSTOMER table and the ACCOUNT table to remain “One to Many”.

Unfortunately, that wasn’t the end of this story, because Accounts can be transferred from one Customer to another – which means the relationship isn’t really “One to Many” after all, since a given Account could relate to 2 different Customers at different points in time. The solution to this was to use an associative table to define which Accounts related to which Customers at which points in time. The associative table looked like:

CUSTOMER_ACCOUNT

  • CUSTOMER Warehouse Key
  • ACCOUNT Warehouse Key
  • FROM_DATE
  • TO_DATE
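A point-in-time query then resolves Customers to Accounts through the associative table. A sketch, assuming a hypothetical warehouse key value and a convention of NULL (or a high date) in TO_DATE for the open-ended current row:

-- Which Accounts did Customer 12345 hold on 15th June 2005,
-- and what did those Account records look like on that date?
SELECT c.customer_wk, a.account_wk, ah.*
FROM   customer         c
,      customer_account ca
,      account          a
,      account_history  ah
WHERE  ca.customer_wk = c.customer_wk
AND    a.account_wk   = ca.account_wk
AND    ah.account_wk  = a.account_wk
AND    c.customer_wk  = 12345
AND    TO_DATE('15-06-2005','DD-MM-YYYY') BETWEEN ca.from_date AND NVL(ca.to_date, TO_DATE('31-12-9999','DD-MM-YYYY'))
AND    TO_DATE('15-06-2005','DD-MM-YYYY') BETWEEN ah.from_date AND NVL(ah.to_date, TO_DATE('31-12-9999','DD-MM-YYYY'));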

There are further wrinkles to this in our data model, such as Customers who stop being Customers and then come back…but that’s another story involving “Parties”!


10g Recycle Bin

Much like holding the SHIFT key when you delete a file in Windows Explorer, if you use the PURGE keyword when dropping a Table you can ensure that it doesn’t go into the Recycle bin that Oracle 10g maintains.

e.g.

DROP TABLE emp PURGE;

NOTE – You can’t roll back a PURGE statement…so be absolutely sure you want it

To see the contents of the Recycle Bin use:

SELECT * FROM USER_RECYCLEBIN;

To remove the entire contents of the Recycle Bin use:

PURGE RECYCLEBIN;

To retrieve Tables from the Recycle Bin use Flashback Table.
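For example, assuming the EMP table above had been dropped without the PURGE keyword:

FLASHBACK TABLE emp TO BEFORE DROP;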

Thanks to Anthony Evans for introducing me to this feature.


Good news for lycanthropes!

Apparently I’m now a Don Burleson supporter… Oracle Pros say Silver Bullets make jobs run faster

Don seems to think that I advocate his approach to Silver Bullet tuning when in fact I don’t. As always in life, it’s not as simple as that and that can only be good news for lycanthropes!

Firstly, just to correct Don, I’m not a DBA. I used to undertake DBA roles quite a few years ago, around the time of Oracle 7.3, but in recent times, i.e. 1997 onwards, I’ve been more of a database designer/tuner/consultant – but that’s not to say that I don’t read around and help the onsite DBAs out wherever I work from time to time.

In my current role as a Consultant Warehouse Technical Architect, I’m called upon by the DBA team and the operations folk to assist when things are not running as smoothly (read quickly/efficiently) as they’d like. In this case they were reporting that specific ETL processes (most of them in fact) were running much slower than expected. These processes had never run any faster but the general consensus was that they should be running quicker. They didn’t have specific times they thought the processes should complete in but their ‘gut feel’ was that they were taking far too long – resulting in a 12 hour batch window when the target is 4 hours.

At this point I was not sure there was a performance problem or a problem with their expectations – but I was certainly interested!

We picked one of the problem processes which seemed to be taking around 1 hour to complete during the daily batch run – averaged over the last week. The times were all pretty similar – no wide swings and all processed roughly the same amount of records.

We ran an explain plan on the statement, as shown below (sorry about the formatting!):

Operation                                  Name
-----------------------------------------  ------------------
SELECT STATEMENT
  PX COORDINATOR
    PX SEND QC (RANDOM)                    :TQ10001
      FILTER
        SORT GROUP BY
          PX RECEIVE
            PX SEND HASH                   :TQ10000
              SORT GROUP BY
                VIEW
                  UNION-ALL
                    PX BLOCK ITERATOR
                      TABLE ACCESS FULL    ACCOUNT
                    PX BLOCK ITERATOR
                      TABLE ACCESS FULL    HDS_T_ICE1_ACCOUNT

So the process boils down into a simple set of distinct steps:

2 Full Table Scans in Parallel – reading the source and the target tables
2 Sorts

So, which step(s) are slowing things down? Or is it something else altogether?

Everyone has an element of the “Silver Bullet” approach in their minds when looking at a performance tuning problem in my view – even Tom or Jonathan I’d guess. Experience brings with it an instinctive view on problems – but doesn’t necessarily mean that the first thoughts are right in all cases.

I’m no different, and my gut feel was that this query should take around 10 – 20 minutes or so to complete, based on my experience with the box we’re on, the data volumes in question and the steps involved in executing the query. So I’m thinking that there is something to tune here and it’s worth investigating – if the query were taking 15 minutes I’d have told them to accept it, as I’ve other fish to fry and, whilst it may be possible to improve the query further, it’s a case of knowing when to stop tuning – or, for me, using my time most prudently.

Analysing the V$SESSION_LONGOPS view whilst the unmodified query was running showed us that the table scans were running quickly but that the sort operations were taking ages.
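For reference, here’s a quick sketch of the sort of query you can run against V$SESSION_LONGOPS to watch this – the filter simply restricts it to operations still in flight:

SELECT sid, opname, target, sofar, totalwork
,      ROUND(sofar / totalwork * 100, 1) AS pct_done
,      time_remaining
FROM   v$session_longops
WHERE  totalwork > 0
AND    sofar < totalwork
ORDER  BY time_remaining DESC;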

We decided to do a simple test (Yes, test – determine actual results by checking a hypothesis) of the Full Table Scans in Parallel on their own to see how long they took – approximately 3 minutes each – so that’s 6 minutes in total. So that means that the 2 sorts must be taking approximately 54 minutes of the 1 hour execution time – which doesn’t seem reasonable.

If we’d organised a session trace we’d have seen lots of waits for IO reads, indicating that it was an IO wait problem. But since the trace files aren’t public (prod database), and it would have taken more time to organise – particularly with parallel query making it a little more complicated (see some notes from Mark on getting a session trace for a parallel query) – we decided to work around that approach, useful though it undoubtedly is.

OK – the next step was the arrival of more information from another very experienced (far more than I) performance tuner involved in our Warehouse environment, who is looking at the SAN we’re on and our layout across it, and who had come up with lots of statistics and evidence showing that we were undergoing a performance issue with some aspects of our SAN – namely that the wait times on certain volumes on the SAN were excessive (50 – 100 ms, instead of the expected values).

Not having the ability to reorganise large areas of our SAN, we decided to create another small temporary tablespace on an area of the SAN which was giving good performance metrics (IO service times), and set things up so that we could run a test where we ran the query using that tablespace as our temporary area. It ran through in 10 minutes, which pretty much supports the theory that something is wrong with the IO on the area of the SAN used for the normal TEMP tablespace.
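The workaround itself is only a couple of statements – a sketch with hypothetical names and sizes, with the new tempfile sitting on the better-performing area of the SAN:

CREATE TEMPORARY TABLESPACE temp_fast
  TEMPFILE '/vol09/oradata/dwh/temp_fast_01.dbf' SIZE 4G;

-- point the schema running the ETL process at the new temporary tablespace
ALTER USER etl_owner TEMPORARY TABLESPACE temp_fast;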

At this stage we still didn’t know what the cause was but we knew we could make it better by moving TEMP.

Did I predict that my IO was 50 times slower when using a concatenated rather than striped approach? No – we proved it with a structured approach and appropriate testing. We didn’t retain the stats when we did this (naughty, I know – especially as that’s half the point of a “proof” approach!), but I think the 50 times slower claim was a bit of an exaggeration when looking at the wall clock timings – it was certainly a significant difference, though.

Further analysis by the SAN/Unix team and myself consequently identified that the volumes where the normal TEMP tablespace is held were set up as concatenations and not stripes – unlike every other volume on our SAN storage. A simple configuration mistake somewhere along the line. Experience would suggest that Striping offers better manageable performance than concatenated volumes.

So, does this story make me an advocate of Don and his Silver Bullet approach? I don’t think so. What would be the Silver Bullet in this case? “Always make sure your SAN uses striped volumes instead of concatenated ones to optimise IO performance”? Sounds like the standard advice that Oracle and most others give really, i.e. use SAME (Stripe And Mirror Everything). It’s not a Silver Bullet – it’s just good advice generally, but note that it’s not necessarily always the case that this approach is optimal – see Steve talking about it here.

The problem here was the SAN had not been configured as required (and requested) for whatever reason and only testing of specific theories resulted in a conclusive identification of the cause and its solution.

Experience allows an Oracle protagonist to instinctively shortlist the potential issues behind a performance problem they face – but only testing will result in the proof that a particular issue or issues is/are the cause of the performance problem.

I’ve read the writeup of the Silver Bullets book Don is promoting where he quotes:

“All Oracle tuning professionals know that they must start by optimizing the database as-a-whole before tuning individual SQL statements. Only after you have tuned the external hardware (disk RAID, network, OS kernel parms) and the instance (indexes, CBO statistics, optimal parameter settings), is it appropriate to tune individual SQL statements and application code.”

Not entirely true, actually – how do you tune the hardware? By performing benchmark tests – using SQL statements – so what if the statements you use are not tuned correctly and execute with a suboptimal plan? You’ll end up trying to tune the hardware to do something it shouldn’t be doing. For example, a warehouse will tend to do lots of parallel full table scans and hash joins, but if a query which should be executing in this fashion is inappropriately trying to execute using indexes/nested loops (for whatever reason), then tuning the hardware to be really good at that activity is not the right thing to be focusing on. A more holistic approach would be beneficial in my view.

Don goes on to say:

“A single change to the optimizer can improve the behavior of hundreds of SQL statements, and database-wide tuning approaches such as adding a missing index, creating a materialized view or adjusting CBO statistics must be done before detailed SQL tuning can happen.”

So how does one know that the index is missing? I suppose you could index everything and then monitor which ones get used, or you could have an extremely rigorous design phase which identifies every potential access path and plans the indexes that will be required. Mostly, I see a missing index identified when a specific SQL query is running slowly, the plan is analysed and the analysis reveals that an index would be beneficial.
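As an aside, Oracle does give you a way of checking whether existing indexes actually get used – index usage monitoring – though that only tells you about indexes you’ve already created, not the ones you’re missing. A sketch, with a hypothetical index name:

ALTER INDEX account_status_idx MONITORING USAGE;
-- ...let a representative workload run for a while, then...
SELECT index_name, monitoring, used FROM v$object_usage;
ALTER INDEX account_status_idx NOMONITORING USAGE;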

And what exactly is this “adjusting CBO statistics”? Why would you do that? The only case I can think of is where we take the stats out of a PROD database and put them into a DEV/TEST database to pretend it holds a larger volume than it really does – for the sake of getting the right plans etc.

Yes, ADDM will recommend things like creating a new index or a materialized view that might help performance, given the nature of a slowly running query – but it doesn’t get it right every time – it’s not magic and it doesn’t deliver silver bullets…merely sensible suggestions which may help the performance of aspects of the system. ADDM did identify that the waits on our datafiles for the TEMP tablespace were high and that this should be investigated – that’s not a silver bullet – it’s just an alert that something is not right and needs looking at.

My motto is quite simple – Test, Test, Test! Without proofs you have nothing.

I’m quite new to this blogging lark and I realise that I’ve probably not given out enough of the details regarding this issue we faced here – which is possibly why Don seems to think that I’m a supporter of his Silver Bullets approach when in fact I am much more of a fan of testing a hypothesis to prove if it’s right.

As always, the Devil is in the Details!

I’ve rambled on enough now…over to you guys for comment…

Pet hate #1 – Garages who don’t care about customer service


I was going to let it go at the first bad experience with one of the garages in the dealer network for my car – a Peugeot 307 DTurbo…but then I’ve just had another so I thought I’d write about it and see what everyone else thinks!

Experience #1 – I had my car serviced at dealer garage #1 and when I got it back the jobsheet listed all the things they’d checked on the service and also the current tread depths of my tyres, including the spare. The jobsheet said all my tyres were 5mm in depth except for the spare, which was new. But since I do quite a high annual mileage I need to keep track of my tyre wear, and given that my car is front wheel drive, the front tyres tend to wear quicker than the rears – resulting in the tread depths being different all round on my car. The tell-tale sign that my tyres had not actually been checked was, however, the fact that the spare was marked as being new…I had had a puncture in recent weeks and had swapped the previously new tyre out of the boot with the punctured tyre, which was nearing the end of its life anyway (the best time to have a puncture, if there is such a thing!)…so I knew that the spare was in fact about 2mm in depth (still legal of course)…or in other words, the garage had simply not bothered to check my tyres at all…which makes you wonder whether they even bothered doing the service. I did bring all this up with the dealer manager, who apologised and tried to make excuses that they had been busy and the mechanic had probably got the readings confused with another car…absolute rubbish in my view. Surprisingly he didn’t feel embarrassed enough to offer me a discount or a free service in future – basically he didn’t care at all. They didn’t even bother to wash the car either.

I figured maybe I’d come across a bad dealer, so I tried another garage in the dealer network when my multi-CD player unit died. I had to take it in and have them pull the numbers off it before ordering the replacement part, which arrived after 2 weeks, and I went in for the work to be done. Afterwards they brought the car out and said “All done Mr Moss…we’ve replaced your RADIO!”…“Radio? radio? RADIO? No, no, no – it’s my CD multi-disc changer that’s broken” I explained, whereupon it transpired that they had fitted me a brand new radio unit…and not touched the CD multi-disc player!! They’ve since reordered it and I’m awaiting a convenient point in time to get it sorted.

Honestly, I ask you…service in this country seems to be getting worse and worse!

Well Done England!

Seems like our boys managed to win the cricket after a nail-biting final day.

I suppose it makes up for the poor showing in the footy last week!

Congratulations!