Technical Debt and C# Best Practices

by Xavier Comments: 0

  • Do you spend more time coding from scratch or do you spend more time maintaining an existing code base? 

With very few exceptions, we typically spend a lot more time updating, improving, and fixing an existing application, which means that it is highly likely that we are going to run into some technical debt.

Do you know what technical debt is?

Technical debt is a concept in software development that refers to the cost of additional rework caused by choosing a quicker or easier solution when developing a product, rather than using a better approach that would take longer.

This “debt” is incurred because the quicker solution may be less efficient, more error-prone, or harder to maintain in the long run. The idea is that, like financial debt, technical debt can be manageable if it is recognized and addressed early on, but if it is allowed to accumulate, it can become a burden that slows down development and increases the risk of problems in the future.

Technical debt can arise for a variety of reasons. For example, a team may choose to cut corners to meet a tight deadline, or they may be working with limited resources and have to make do with what they have. In some cases, technical debt may be incurred intentionally, as a trade-off to get a product to market more quickly or to test a new feature. However, it is important to carefully consider the long-term costs and benefits of incurring technical debt, and to make a plan to pay it off as soon as possible. Otherwise, it can become a major impediment to the success of a product.

IMHO, it is called technical debt because the individual who wrote the original code OWES me all the time I spent understanding that code before I could actually begin to work.

Sometimes it even takes me a lot of time to understand the original code that was written by none other than myself!

However, as I became a more experienced developer, I learned how following coding best practices can greatly improve the process of maintaining an application.

Just so that we are on the same page, a best practice is a method or technique that has been proven to be effective in achieving a specific goal or objective. It is a standard or benchmark that is widely accepted and recommended within a particular industry or field as the most effective way to approach a particular task or problem. Best practices are based on the accumulated knowledge and experience of experts and professionals in a given field, and are typically designed to maximize efficiency, effectiveness, and overall quality of outcomes.

For C#, there is a set of best practices that you can find on the C# Coding Conventions page, which in turn was adopted from the .NET Runtime's C# Coding Style.

There are many best practices that developers should follow to ensure that their code is efficient, well-structured, and easy to maintain. 

Here are some of the top C# best practices that every developer should keep in mind (a short example that puts several of them together follows the list):

– Use meaningful names for variables, methods, and classes. Naming conventions are important because they help other developers understand what your code does and how it works. Avoid using abbreviations or single-letter names, and use camelCase for variables and PascalCase for methods and classes.

– Use proper indentation and whitespace to make your code more readable. Proper indentation helps to visually group related code together, making it easier for other developers to follow your logic. Use blank lines to separate blocks of code and add comments to explain what each section of code does.

– Use constants instead of hardcoded values. Constants are values that are defined once and cannot be changed. Using constants helps to make your code more maintainable, as you can easily update the value of a constant in one place instead of having to search through your code for hardcoded values.

– Avoid using null values if possible. Null values can cause NullReferenceExceptions, which can be difficult to debug. Instead, use default values or create your own null object class to handle null values.

– Use exception handling to handle errors in your code. Exceptions allow you to gracefully handle errors and prevent your application from crashing. However, be sure to use exception handling sparingly and only for situations where it is truly necessary.

– Follow the SOLID principles of object-oriented design. The SOLID principles – Single Responsibility, Open-Closed, Liskov Substitution, Interface Segregation, and Dependency Inversion – help to create maintainable and scalable code.
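
To make this more concrete, here is a small sketch that puts a few of the practices above together: meaningful PascalCase and camelCase names, a constant instead of a hardcoded tax rate, a default value instead of passing along a possible null, and exception handling only around the parsing that can actually fail. The names (InvoiceCalculator, TaxRate, and so on) are made up just for this example.

using System;

public class InvoiceCalculator
{
    // A constant instead of a hardcoded value scattered through the code
    private const decimal TaxRate = 0.13m;

    // PascalCase for the class and method, camelCase for parameters and locals
    public decimal CalculateTotal(decimal subtotal)
    {
        decimal tax = subtotal * TaxRate;
        return subtotal + tax;
    }
}

public static class Program
{
    public static void Main()
    {
        var calculator = new InvoiceCalculator();

        try
        {
            // The null-coalescing operator supplies a default instead of a null
            string input = Console.ReadLine() ?? "0";

            // Exception handling only where something can actually go wrong:
            // parsing user input
            decimal subtotal = decimal.Parse(input);
            Console.WriteLine($"Total: {calculator.CalculateTotal(subtotal)}");
        }
        catch (FormatException)
        {
            Console.WriteLine("Please enter a valid number.");
        }
    }
}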

I have summarized these best practices in my course, C# 10 Best Practices. Click on this link to learn more:

By following these best practices, you can ensure that your C# code is of high quality and easy to maintain. Remember to always think about the long-term maintainability of your code, as it will save you time and headaches in the future.

Feel free to drop me a note if you have questions or comments! You can find me on https://twitter.com/xmorera or through my contact form!

This post is part of the C# Advent Calendar 2022 which you can find at https://csadvent.christmas/, courtesy of Matthew D. Groves! Please check the advent calendar for more great blog posts!

T-SQL Tuesday #145: The Pandemic, Costa Rica, and Events

by Xavier Comments: 6

Welcome back to this blog party tradition that has been going strong for years!

I am really happy to be hosting this month and since we are in the middle-ish (hopefully closer to the end) of a pandemic, I would like to ask you the following question:

How much do you love meeting in person, where would you like for your next event to take place, and why Costa Rica?

I am no stranger to in-person events. In fact, I’ve spent a good deal of my life traveling all over the world, teaching technologists from all kinds of companies – big and small – on a wide range of subjects.

Some of the places I have traveled are fantastic for that real-world interaction that we all need.

Yes, remote work is nice and many companies and employees have indeed found out that you can actually work from home in an efficient manner.

However, IMHO, there is no replacement for that feeling of walking into the presentation hall, having the chance to talk to experts and meet new and interesting people that are most likely having the same problems as you, or that are trying to change the world one application/solution at a time.

Now, help me by answering these questions:

  1. Which is your favorite conference and why?
  2. Which is the best venue that you have visited for a tech conference?
  3. Who is the best presenter that you have ever listened to?
  4. Which location would you like for your next event to take place and why Costa Rica?

Let me know what you think!

The Rules:

  • Write your post and publish it on Dec 14, 2021
  • Include the T-SQL Tuesday logo and link to this post.
  • Ensure you leave a comment on this post with the URL of your post (or a trackback/pingback)
  • Publicize your post on Twitter/LinkedIn with the #tsql2sday hashtag

Implementing Search Article on MSDN

by Xavier Comments: 0

A couple of years ago I wrote an article for MSDN magazine called Implementing Your Own Enterprise Search.

I was really excited, as I started my career as a .NET developer and MSDN Magazine was the last word on cutting-edge .NET at the time.

I remember waiting for each new dead-trees edition of the magazine to arrive so I could read it cover to cover and learn as much as I could.

Well, mostly thanks to Julie Lerman, I was able to write one article as she pointed me in the right direction to submit an idea.

One thing that's missing is the source code, as MSDN Magazine is no longer maintained, so I created a repo for it:

https://github.com/xmorera/implementing-search-solr-msdn-article

Hope it helps!

How Pluralsight Changed My Life Twice

by Xavier Comments: 1

“Skills speak louder than words”

Indeed they do, but the “actions” part is step 2 once you get “skills”.

This is the story of how skills + actions with @pluralsight changed my life TWICE.

Let me tell you why

I've been in the training business since around 2002, when I co-created and delivered the 32-bit to 64-bit migration labs for @Microsoft, @HP, and @Intel while working at Artinsoft.

We were teaching enterprises all over the world how to migrate their code from 32-bit processors to 64-bit. This was part of an initiative led by Microsoft and since Artinsoft created the VB6 to VB.NET migration assistant, we got a chance to help companies worldwide migrate their code.

The effort was called Route 64, and we got a chance to train companies of all sizes while traveling the world (the travel part was fun, check out a list here: https://www.xaviermorera.com/road-warrior/) .

One of the locations where we gave our trainings was Building 20 in Microsoft’s main headquarters in Washington.

At the time, there was a small company giving trainings in Building 20 as well. Who knows, maybe I even crossed paths with some of its founders back then.

At the time, Pluralsight offered instructor-led training; the online part had not been born yet. A few years later it was, and that's when my life started changing.

I became a subscriber and started working on my skills, which helped me become a better programmer and engineer.

This was life-changing moment #1 for me.

My skills improved and I kept getting better at what I did, which also allowed me to help some of my peers one-on-one whenever they were stuck in their work (something I still do and enjoy today).

One day I had an idea: what if the things I was teaching and helping my peers with… what if… I created video trainings that I could then share?

So I sent a mail through their ticketing system.

They replied a month or so later and I started my auditioning process to become a Pluralsight author.

This was life-changing moment #2.

I got accepted and started working nights and weekends creating trainings until I was able to fire myself from my day job and dedicate myself to working on my passion… creating trainings!

https://www.pluralsight.com/authors/xavier-morera

And so Big Data Inc was born… (story to be continued)

Working with Large Files in GIT (LFS)

by Xavier Comments: 0

The other day I casually committed a file, and when I pushed I ran into an error letting me know that I was hitting a limit: I had exceeded the maximum allowed file size on GitHub, which is 100 MB.

This issue is quite well documented in several places, including this issue on GitHub: https://github.com/desktop/desktop/issues/4066

Yeah… who commits a file over 100 MB in size to Git?

Guilty as charged… it was a template (potx) that I needed to apply but that had pleeeeeenty of images.

How do I fix this and commit a large file?

#1 First, I need to “uncommit” the file, which is easy since it never made it to the remote (the push was rejected), so a git reset works.
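
For example, assuming the large file went in as the most recent commit, something like this removes the commit but keeps the changes:

git reset --soft HEAD~1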

#2 Use Git LFS (Large File Storage), an open source Git extension for versioning large files.

Git Large File Storage (LFS) replaces large files such as audio samples, videos, datasets, and graphics with text pointers inside Git, while storing the file contents on a remote server like GitHub.com or GitHub Enterprise.

It is easy to use; simply install it first. You can download it from https://git-lfs.github.com/

Then you need to set it up for your user account by running the following (this needs to be done only once per account):

git lfs install

Next up, specify which file types you want to track, that is, store in LFS (this needs to be done per repo):

git lfs track "*.pptx"

Repeat for each file type that you intend to store in LFS. This information is stored in a file called .gitattributes.

Finally, commit the .gitattributes file so that anyone who pulls the repo also uses LFS (they need to install it too).

Work normally.

For any files that you committed before LFS, you need to migrate them.
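
If you end up in that situation, git lfs migrate can rewrite the existing history to move those files into LFS. For example, assuming .potx files and that you are OK rewriting history on all local branches:

git lfs migrate import --include="*.potx" --everything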

Enjoy!

PS: If you want to learn more, I have an entire course on how to use Git with a GUI

https://app.pluralsight.com/library/courses/using-git-with-gui/table-of-contents

Stock Market and Unemployment: Trump vs. Obama According to Artificial Intelligence (Machine Learning)

by Xavier Comments: 0

Disclaimer

Let me start off by saying this: this post is not intended to be a political statement. 

It is not my intention to start a red vs. blue thing. I simply want to look at the data and find out if the claims of “biggest economic growth in history” from Trump stand from the perspective of Artificial Intelligence (AI), using several different Machine Learning algorithms.

The Statement and the Methodology

For the last three-and-change years, the most repeated phrase from 45 has been “biggest economic growth ever” or something of that sort.

But… does this stand true?

Lucky for us, the data is public. It is possible to check the unemployment records in the US as well as the stock market. 

There are even plenty of charts that try to show the evolution of both unemployment and the stock market across the Obama and Trump eras.

But, has the US been “on a roll and recovery” during Trump or is he just reaping the benefits of what Obama did?

There are plenty of articles and serious publications that try to show this, including:

– US 2020 election: The economy under Trump in six charts from the BBC

– Trump boasts the US economy is the best it’s ever been under his watch. Here are 9 charts showing how it compares to the Obama and Bush presidencies from Business Insider

– The Trump vs. Obama economy — in 16 charts from Washington Post

But what if we take a different approach?

 

A Different Approach with Machine Learning (AI)

Let’s take the data for both unemployment and the stock market and use the Obama era as our training set and the Trump era as our testing set.

That way, we can predict what the numbers would look like if the existing trend simply continued, and compare that against what actually happened to see whether Trump helped improve the economy in terms of creating jobs and making the stock market grow.

Here’s a repository with the unemployment and stock market data. Go ahead, create your own Machine Learning model to determine if the US grew economically because of Trump or simply continued from what Obama created.

https://github.com/bigdataincorg/stocks_employment 

If you create a model, go ahead and share your results. Create a pull-request if you want.
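
If you want a quick starting point, here is a minimal sketch in C# of the basic idea: fit a simple linear trend on the Obama-era series (the training set) and extrapolate it across the Trump-era months (the testing set). The unemployment values below are placeholders; in practice you would load the real monthly numbers from the repository's data files.

using System;

public static class TrendSketch
{
    public static void Main()
    {
        // Placeholder "training" series standing in for Obama-era unemployment rates.
        // Replace these with the real monthly values from the repository.
        double[] train = { 9.8, 9.5, 9.1, 8.7, 8.3, 7.9, 7.5, 7.0, 6.6, 6.1, 5.6, 5.0, 4.7 };
        int periodsToPredict = 12; // the "testing" period to extrapolate into

        // Ordinary least squares fit of y = a + b * t
        int n = train.Length;
        double sumT = 0, sumY = 0, sumTY = 0, sumTT = 0;
        for (int t = 0; t < n; t++)
        {
            sumT += t;
            sumY += train[t];
            sumTY += t * train[t];
            sumTT += (double)t * t;
        }
        double b = (n * sumTY - sumT * sumY) / (n * sumTT - sumT * sumT);
        double a = (sumY - b * sumT) / n;

        // Extrapolate the trend into the testing period; comparing these
        // predictions against the actual numbers shows whether anything changed.
        for (int t = n; t < n + periodsToPredict; t++)
        {
            Console.WriteLine($"Period {t}: predicted unemployment {a + b * t:F2}%");
        }
    }
}

The model behind the charts in this post is more elaborate, but the comparison logic is the same: if the actual values track the extrapolated trend, then the trend alone explains them.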

Unemployment rate

Unemployment climbed to close to 10% shortly after Obama started his term, in large part because of the 2008 crisis, the Great Recession as some call it.

In 2016 when he handed the keys to the White House to Trump, the unemployment rate was a tad over 4%. 

The downward trend in unemployment held for the next few years, until it got to about 3.5%, and that's pre-covid.

If I ignore Covid and simply use Obama as my training data, then I am able to predict unemployment for the next few years until the pandemic began.

 

The Machine Learning model predicted quite well, but as you can see, unemployment has actually not declined that much since he took office.

Maybe that last part is a bit harder to argue, I can agree on that, but my point is that it looks like there was no real change; the trend just stayed the same.

 


Referring to the stock market's pre-covid-19 record highs: the wealthiest ten percent of shareholders own more than ninety percent of all stocks, and the 2017 tax cut mostly benefited corporations and the rich, with middle-class income households getting a tax cut of about nine hundred thirty dollars and the top one percent enjoying a cut of more than fifty thousand dollars.

The unemployment rate pre-covid-19 was indeed at a half-century low, but this rate has been falling steadily since 2011, so we can't really see a difference since he took office. Even though 6.6 million jobs were created under the Trump administration, we shouldn't forget that Obama inherited an economy during the worst financial crisis since the Great Depression. Therefore, job creation under the Trump administration is merely a continuation of an improving job market and can't be compared to the turnaround in the early years of the Obama administration.

Monthly job growth was higher under Obama than in the first two years after Trump took office. Obama added over 1.6 million more jobs in his last three years in office compared to Trump’s first three years. 

Regarding the stock market, here is a chart below. In blue we can see the Obama years. The stock market more than doubled during his time. 

It did grow with Trump too; I hear some people say that it is all because Obama provided a stable and drama-free environment for companies to grow and thrive, while other people say that the economy boomed (although I would say “kept booming”) because of Trump's policies.

Well, my Machine Learning model basically thinks that the stock market kept growing just like the momentum that you gain when someone pushes you on a bike… you keep going because you got the right push.


It is possible that there are two main reasons why the stock markets have risen: the tax cuts, and the Federal Reserve keeping interest rates low and flooding the markets with cash. And how did the stock market perform under Trump's leadership when compared to Obama's? We are doing our best to keep this a data-driven piece and not turn it into a political viewpoint.

And we want to point out again that no one can claim responsibility for the stock market; only slight influences can be observed.

We can observe a steady climb through most of Trump’s first two years. But the market still did better with Obama as president for his first three years. 

Even though, as has been mentioned frequently before, it is not valid to attribute stock market performance to one person, we can track data over longer time periods and see that US stocks perform better under Democrats.

(Forbes reports that from March 4, 1929 through July 5, 2016, U.S. stocks returned an average of 1.71% under Republican administrations and 10.83% under Democratic administrations. While an updated analysis would have the gap narrowing, it would still be significant.)

In essence, the stock market was thriving under both presidents, leaving us to think that capitalism acts regardless of who is politically in charge.

The stock market was up 46% with Obama compared to 25% under Trump. Obama ended his presidency with one of the best gains of any president in modern history. When Trump started and introduced his tax cut, stocks were high, but they declined progressively once he started his trade war.

His 2019 stock market gains are still minimal compared to his predecessors'. Stocks grew faster after the reelection of both Clinton and Obama. A 28.6% gain during Trump's third year pales compared with Obama's 32% while recovering from a financial crisis.

"Economic Boom"

Trump likes to credit himself with bringing about an economic boom the likes of which the world has never seen before. He feels he has launched the great American comeback. But in reality, he didn't really attain his goal of raising the economy's growth rate to four percent. There was only a small increase in GDP growth. His tax cut did create growth in 2018, but the effects were largely offset by the trade war. We could argue that economic growth performed slightly better than under Obama, but not in comparison with all his recent predecessors.

Trump especially uses GDP as an example of his success and a major reason for his reelection. It is worthwhile to mention that presidents can’t really take credit for the state of the economy. There are many factors that have an impact on growth that have nothing to do with their policies. 

To be totally accurate, Trump started his presidency with a steady economy, unlike Obama, whose term began in the middle of a serious recession. Trump frequently points out that the US economy is the best it has ever been. This is not the case if we take into account wage growth or business investment. Looking at GDP growth under Trump, it does not reach his promised 3% annual mark. It is true that the economy improved under Trump, but the recovery began under Obama.

Again, there are a multitude of factors to consider when measuring the state of the economy; therefore, it is not actually valid to ascribe the situation to one president. Here's how the Trump administration's economic accomplishments actually compare to Obama's.

If we do want to compare Trump's GDP growth to Obama's over a period of three years and look at the numbers, it is safe to say that in Trump's case it is actually slower: Obama's last three years showed more growth than Trump's. Claiming that he built the greatest-ever US economy before the coronavirus outbreak is not exactly true either. It was doing well, but again, this started during the Obama administration, and there were also periods when it was a lot stronger. The annual average growth was roughly similar during both presidencies.

As for the tax cuts Trump introduced, they were beneficial to economic growth for about a year but didn't pay for themselves and created a federal budget deficit of $1 trillion, something that had never occurred outside a recession. Actually, GDP growth was higher on average under Obama in 2014 and 2015 than under Trump in 2017 and 2018.

Tax cuts

“We lowered our business tax from the highest in the developed world down to one that’s not only competitive, but one of the lower taxes.”
Donald Trump

There are no signs that capital spending and wages are increasing because of the tax cut. The tax cut only boosted the net worth of CEOs and stockholders, and left a debt of about $2.9 trillion. It did not meet the goal of more investment in new equipment and factories. There was a slight increase in business spending in 2018, but it has since declined heavily, mainly because of the trade war.

Federal debt

During Obama's presidency the national debt swelled as the government tried to rebuild the economy after the financial crisis, but by the end of his term the annual deficit had significantly declined. Because of Trump's tax cut and an increase in government spending, the annual deficit has gone up considerably again.

Conclusion

We thought it would be interesting to look at Trump's economic growth claims with a fairly neutral, data-driven approach. We see that there wasn't significant growth during Trump's pre-covid years and that stock markets function fairly independently of who is politically in charge. We were able to lessen the validity of some of Trump's claims and point out the fallacies in many of his statements. The data indicate that there were no significant increases and that his wins are merely a continuation of what was put into action before he took office.

To conclude, it is safe to say that the pre-pandemic growth only followed the existing trend and the economy wasn't exactly booming like it had never boomed before; at least, that's what my Machine Learning model says.

A big thank you to Viva Lancsweert and Humberto Barrantes for helping me research this topic and create the nice charts.

The History of Everything Around Big Data

by Xavier Comments: 0

The tech world changes fast… really fast.

It seems like every time you blink, there is a new framework that gets created or a new language comes along.

In some cases, you can just ignore all these new shiny things… but maybe, just maybe this new framework, language, or service can help make your life easier.

But how do you stay up to date?

That’s where I come in. I will be posting several articles where I go deeper into the world of tech, with a primary focus around everything Big Data.

Some of the topics that I will cover include getting to know the leading Big Data products, their origins, how and when to use them, and why they matter.

And if you are tight on time, then I have more good news for you. Each one of these posts will come with a video, so you can hear about a particular topic while you are at the gym, commuting, or perhaps when you need something to put you to sleep.

Here’s the list of what we have published and what’s coming in the near future:

Welcome to Big Data TV – Or The One That Started It All 

This is just the intro post, which tells you a bit more of what I am going to be covering next.

Check out the post here or the video here

Here’s what’s coming next:

The Story of Hadoop and Why Should I Care?

by Xavier Comments: 0

You might have heard or seen the term Big Data. The term refers to data sets that are too large or complex to be dealt with through traditional processing applications.

In fact, the information within these data sets is so enormous it can't be stored or processed on one server. Instead, it might take calls to several devices to retrieve the data. Even then, processing time can still be incredibly slow.

Distributed Computing

This is where Hadoop comes in. Developed in 2005 by a pair of Apache software engineers, the platform creates a distributed model to store large data sets within computer clusters. In turn, these clusters work together to execute programs and handle potential issues.

So, how did we get to this point in the world of digital information? Did it appear without notice, or did the concept of large data sets gradually form?

Let’s get into some history on the creation of Big Data and its connections with Hadoop.

Beyond The Information Age

The concept of Big Data goes beyond the Information Age. Individuals and groups have dealt with large amounts of information for centuries.

For instance, John Graunt had to deal with volumes of information during the Bubonic Plague of the 17th century. When he compiled the data into logical groups, he created a set of statistics. Graunt eventually became known as the father of demographics.

Issues with large data occurred after this, as did the development of solutions. In 1881, Herman Hollerith began work on a tabulating machine that used punch cards, which was later used to process the 1890 Census. In 1927, Fritz Pfleumer invented a procedure to store data on a strip of magnetic tape.

As more data was collected, the means to store and sort it changed. There wasn't any choice, as the information became increasingly complicated. Consider, for example, the amount of calculation required by NASA and other space agencies to launch successful programs.

Move Into Popular Culture

However, this didn’t match the accumulation of data collected once computers were made available to the public. It reached enormous sizes when those users learned about the internet. Add smart devices, artificial intelligence, and the Internet of Things (IoT), and “Big” has become exponentially huge.

Consider what is part of this label. Social media is a large piece of it. Credit card companies and other groups that handle Personally Identifiable Information (PII) also produce large amounts of information. Banks and other financial firms create well beyond trillions of data bytes in a single hour.

The Official Term

It wasn't until 2005 that this process was given the name we know today, coined by Roger Mougalas, a director of market research at O'Reilly Media. At that time, he referred to it as a set of information that was nearly impossible to process with traditional business tools. That includes Relational Database Management Systems (RDBMS) like Oracle.

What could a business or government entity do at that point? Even without excessive information from mobile devices, there was still a large volume of data to compile and analyze. This is where two Apache designers — Doug Cutting and Mike Cafarella — came into play.

Computer Clusters And Large Data

In 2002, these engineers started work on the Apache Nutch product. Their goal was to build a new search engine that could quickly index one billion pages of information. After extensive research, it was determined the creation of Nutch would be too expensive. So, the developers went back to the drawing board.

Over the next two years, the team studied potential resolutions. They discovered two technological white papers that helped. One was on the Google File System (GFS) and the other was on MapReduce. Both discussed ways to handle large data sets as well as index them to avoid slowdowns.

This is when Cutting and Cafarella decided to utilize these two principles and create an open source product that would help everyone index these large data amounts. In 2005, they created the first edition of the product, then realized it needed to be established on computer clusters to properly work. A year later, Cutting moved the Nutch product to Yahoo.

It’s here he got to work. Cutting removed the distributed computing parts of Nutch to create the framework for Hadoop. He got the name from a toy elephant his son owned.

With GFS and MapReduce, Cutting created the open source platform to operate on thousands of computer nodes. In 2007, it was successfully tested on 1,000 nodes. In 2011, the software was able to sort a petabyte of data in 17 hours; this is equal to 1,000 terabytes of material. The product became available to everyone that same year.

Of course, this is not the end of the story of solutions for indexing large data. Technology continues to change, especially as outside influences make more of us head to our computers. There will come a time when something more powerful than multiple storage nodes will be required.

Until then, we thank those who have already gone through the steps to help all of us retrieve large amounts of data in the quickest and most efficient way possible.

Easy VOIP Calling for a Small Business

by Xavier Comments: 0

Did your company have to transition immediately to WFH?

Are you now at a disadvantage because you had your small business phone system all set up, but now it does not work when everyone is at home, or it has become quite expensive?

Here is the solution that has worked the best for me over the years: Skype Manager, which does not involve any setup and has a reduced cost. You just need to install Skype, which runs on pretty much anything: a computer with almost any OS, a tablet, iOS, Android… you name it.

Here’s my business case. I own a support center that provides support to a tech company.

We are in Costa Rica, their customers are in the US and Canada.

When I was asked to implement the phone system, I had several options, which included setting up an Asterisk PBX or looking for other solutions.

I had tried Asterisk before but it had several drawbacks.

So what I did was use Skype Manager to invite the collaborators, assign them a subscription so that they can make calls, and assign them a landline number to receive calls.

You can allocate credit in case they are calling an area not covered by their subscription, set up a Live Chat button if you want to, or use Skype Connect in case you need to integrate with an existing SIP-enabled PBX.

You have very good control of how you spend your money.

And there are plans for everywhere.

Each plan even has multiple options tailored to your needs.

The cost savings, easy setup, and control are amazing.

I know, there are newer options like RingCentral or others that provide good functionality.

But this one worked for me, and it gets the job done.

Hope it helps.

Tip of the Day: The Best Screen Capture Tool

by Xavier Comments: 1

One of the things about working from home is that you can’t just pick up your laptop, turn it around, and tell your coworker: “look here, this is what I need”.

If you are remote or distributed, the story is different. You have to share in a particular way. You could start a screen sharing session, but that may be overkill.

Here is where screen capture comes to the rescue.

The “standard” way is to press the print screen key (PrtScr), open mspaint, paste, save, and then send via email or chat.

Well, that did not sound that convenient.

Let me tell you about a lovely tool called Jing that I have been using for many years, although it is now known as TechSmith Capture:

https://www.techsmith.com/jing-tool.html

The lovely thing about this tool is that it is pretty easy to use. One nice feature of Jing is that it puts a small sun in the corner of your screen; hover over it and it expands to show the available options.

You then select which part of the screen you want to capture, and then it gives you the option to add text, arrows, and more.

Then you can save locally, to your clipboard, or upload to TechSmith servers and it gives you a URL.

This last one is quite nice as you can share immediately.

By far, Jing is the best tool that I've used over the last 10 years. Hopefully the transition to TechSmith Capture won't let me down.

Oh, and it works with images as well as short videos!

What are you waiting for? Download the tool and share away.