Friday, August 09, 2013

The Hunger Games


Finally finished the Hunger Games Trilogy, though admittedly I speed-read through the last book.

I thought the first book was great. Even though it was basically a teenage love triangle story, the backdrop of a dystopian future with televised death-matches was pretty brilliant. "The Hunger Games" successfully satirizes both ancient Rome and modern-day America, all while engaging the reader in a fast-paced thriller. You have to suspend logic every now and again, but no big deal.

The second book, "Catching Fire", was basically a re-tread of the first. There are some new characters, and it's only a matter of time before everybody ends up back in the arena. The love triangle was getting a bit tiresome already, but the death-match action was pretty fun.

The third book, "Mockingjay", is a horrible mess. The plot just languishes for pages on end while Katniss obsesses about irrelevant things. I think the main problem is that once she enters the world of a rebel military action, she is really just a spectator to the events that unfold all around her. Since her narration no longer serves the purpose of advancing the plot, her continuing agony over whom to take to prom (Team Gale or Team Peeta) just feels tedious. Also, the details of the military action are spectacularly absurd.

That said, Suzanne Collins is so rich from selling these three books, she can afford to hire someone to ignore this review so she doesn't have to do it herself :).

Sunday, July 28, 2013

Sorry William Faulkner, But I Preferred The Hunger Games

I recently picked up a copy of Absalom, Absalom!, by William Faulkner, from my local library.

Here is the first sentence:
From a little after two o'clock until almost sundown of the long still hot weary dead September afternoon they sat in what Miss Coldfield still called the office because her father had called it that--a dim hot airless room with the blinds all closed and fastened for forty-three summers because when she was a girl sometimes had believed that light and moving air carried heat and that dark was always cooler, and which (as the sun shone fuller and fuller on that side of the house) became latticed with yellow slashes full of dust motes which Quentin thought of as being flecks of the dead old dried paint itself blown inward from the scaling blinds as wind might have blown them.
After reading this gargantuan sentence, I wondered whether the author was just trying to establish a mood or whether the entire book is like this. It simultaneously introduces two people while meandering through a chain of several topics. Skimming several random pages, I realized there are usually only two or three sentences per page. Also, there are colons, semicolons, and dashes galore.

Compare that sentence with the first sentence from The Hunger Games:
When I wake up, the other side of the bed is cold.
Or, for that matter, the classic Moby Dick:
Call me Ishmael.
I have finally realized that my lack of literary sophistication is really a function of the author's average sentence length. I remember reading Hemingway's The Old Man and the Sea, and I loved it. Hemingway is well known for his short, vigorous sentence structure. Maybe I just have such a bad memory that I lose the thread when I cannot remember where the sentence started.

Incidentally, The Hunger Games is a great book. For a book that is purportedly "teen fiction", it deals with several adult, contemporary themes. Suzanne Collins obviously has a great deal on her mind other than teenage love interests. I highly recommend reading it.

Sunday, July 21, 2013

How Using Gittip Can Hurt Your Open Source Project

I was searching for information about Python Virtual Environments when I came across this link in the Hitchhiker's Guide to Python. According to Github, this project has 116 contributors. I have read parts of this guide before, and it's very useful.

I read through the virtualenv page, and thought about forking the project to add information based on my experience. That's when I saw the Gittip link. On the left pane, the "Donate" link points directly to Kenneth Reitz's Gittip account.

Now that money is involved, I cannot justify spending my time and energy making contributions to a project that will financially benefit someone else and not me. Sorry, but I am a professional and my time is valuable. If the original author of a project wants to profit financially from it, I respect his decision, but that is exactly the moment that I will demand to be paid for my services.

I can think of a few interesting solutions to this:

1. Create associations for the purposes of tip contributions.
2. On Github, create a price negotiation system for pull requests.

Solution #1 is kind of obvious--contribute to some group of people rather than just one guy. Somebody still has to be responsible for divvying up the money, but the association can have rules for determining such things.

Solution #2 is more subtle. If I publish a patch in my fork, then submit a pull request, why not request $100 from the project maintainer? If he is making $200 in tips per week from open source contributions, he can now choose whether the patch is worth the money requested. If he merges the patch without paying (theft), that's a guaranteed way to prevent any future patches. If, on the other hand, he likes the patch and wants more from the author, he can either accept the asking price or negotiate. As long as both parties finally agree, a positive-sum transaction has occurred and we have a functioning marketplace.

I admit that adding money to open source development takes some of the fun out of it, but that ship sailed as soon as people started making money from Gittip. The most important thing is to understand human behavior. If you manage a project and you expect to make money from it, you need to pay your developers.

Monday, April 15, 2013

Stop Deconstructing; Start Constructing

A few days ago I watched a fascinating video about how The Lego Group is a sexist corporation that discriminates against women by marketing its popular Lego toy brand exclusively to boys.

http://www.youtube.com/watch?v=CrmRxGLn0Bk

The author is prominent feminist Anita Sarkeesian. I found her YouTube channel after following a link on Reddit to her series on gender stereotypes in video games. Her videos are very well produced, entertaining, and informative. They are so good, it makes me wish I studied audio visual technologies at some point in my career. Anyone aspiring to convey information to a contemporary audience can learn a lot from her.

First, let me take a moment to address the content of her video. Legos are not sexist. The Lego Group is a multi-billion-dollar corporation, and corporations want to make money. If they could make money by marketing and selling Legos to girls, they would. As a matter of fact, they do sell Legos to girls. Anita points this out, but she disapproves of the product line because it looks a little too girly.

So here we arrive at the crux of the matter. The Lego Group obviously spent millions of dollars researching what kind of Legos girls really want, and ultimately arrived at a product line they could actually sell in the real world. Anita shouldn't disapprove of The Lego Group for creating pink and purple Legos, she should disapprove of girls for not liking different kinds of Legos. I suspect this conclusion would be anti-feminist.

All of this is interesting, but what is the point?

On her About Feminist Frequency page, Anita describes herself as a "media critic" who "deconstructs the stereotypes and tropes associated with women in popular culture." That sounds fair enough--I notice lots of objectification of women in popular culture. Women in video games are usually treated pretty poorly, and several women in recent memory have won Academy Awards for playing prostitutes. So who could have a problem with a young author taking on these vicious stereotypes?

I don't have a criticism so much as a suggestion. For anybody out there who thinks the world is wrong somehow, there are two ways to go about fixing it: 1) deconstruct what is wrong, or 2) construct what is right. Ironically, this essay is an example of #1 that advocates #2.

It is tempting to choose path #1. Communication with a mass audience is easier today than it ever has been. To the aspiring college graduate with a blog and a mission, the attention of the entire world is merely a tweet away. Once you have a few followers, you can get a few gigs lecturing at conferences, universities or TED. Write a few books--even if they don't sell very well you can still get a great job writing and producing for MSNBC.

The harder path is #2. Let me illustrate using an unrelated example. The authors of the Go programming language saw several problems with today's programming languages and software engineering in general. The language authors are some extremely eminent computer scientists; they could have just gone to their blogs and complained about everything that is wrong with C/C++/Java while making a nice living working for Google. Instead, they decided to build a new programming language that fit their view of how things should be done. A few years later, they settled on version 1.0, and many people use Go for real-world projects.

Western civilization is not perfect, but it's pretty good. If there is some aspect of your job, culture, government, school, etc that you do not like, build a better one. Focus the majority of your energy on the places that need the most work. If you are right--in the words of the fictitious Terrence Mann--"people will come".

Stop deconstructing things, and start constructing them.

If you think Legos are sexist, then build a better Lego. Girls everywhere will be grateful.

Sunday, March 17, 2013

PyCon Day Three

All conferences seem long by the time you get to the last day :).

The keynotes this morning started out with some great news from Van Lindberg concerning the trademark dispute with a UK company (owner of python.co.uk) over the word "Python". Lots of nice words about the community and foundation, but enough already--let's get to the Guido keynote.

Unlike last year, Guido decided to dedicate his keynote to a very technical topic. It seems he finally decided to address the schism in the community regarding asynchronous event programming. There are some nice language features and new standard library APIs coming up. I would say more about this, except that much like Guido from 2012, I don't really care about asynchronous event programming.

Spent the next hour wandering around aimlessly through all the exhibits. Lots of interesting stuff here, but I only stopped to talk to a guy from ReadTheDocs.

Spent another half hour wandering through the job fair. I don't think it's an exaggeration to say that at least half of the companies at the job fair had the word "cloud" either in their title or in their business summary. I got into some in-depth conversations with representatives about their actual products. A few years ago, most of these services were called "IT". Now, any generic service that helps companies with their compute resources is called "cloud". Okay, cool, so now at least I know what the current gold rush in Silicon Valley VC is all about.

I ran into Larry Hastings and decided to wait around to ask him a question about Python byte-code. He and Martin von Löwis had lots of interesting things to say about Python byte-code, JVM byte-code, and byte-code hardware acceleration. Projects like Numba and PyPy are most likely better starting points for this than CPython. Not sure about Numba--it looks more like it can benefit from GPU optimization. Speaking of which, I also had a brief conversation with Travis Oliphant about Numba/LLVM targeting Intel HD Graphics hardware. All they currently support is CUDA, so that needs to change.

After morning session was over, time for lunch. I ran into another Intel employee, and had a really great conversation with some of the other conference attendees. Best random breakfast/lunch conversation so far. Two ladies who did astronomy and Bioinformatics, a Microsoft engineer, two Intel engineers (including me), and someone whose business I forgot.

After lunch I attended the talk by Brett Cannon on how Python import works. I can't honestly say I followed every bit of his talk, but he used an interesting presentation methodology. He drew a huge decision diagram of the entire Python import sequence, zoomed all the way out, and then zoomed into different parts of the algorithm to talk about details. I need to try this technique some time.

Lastly was Alex Martelli's talk, "Good Enough is Good Enough". I loved this talk, because Alex brought a lot of background to a problem every engineer faces: what tradeoffs to make when deciding how long to keep polishing the diamond before actually shipping something. I need to go back and watch this one again on NextDayVideo, and follow up on some of the extra reading he suggested.

Okay, so that was my PyCon 2013. Ate some good food, talked with some fellow Pythonistas, listened to some great talks, and just overall had a fantastic time. Totally worth it :).

Saturday, March 16, 2013

PyCon Day Two

Up early, ready to get more swag from my second day of PyCon.

Started the day on a negative note--apparently Jessica McKellar had a very pressing personal/family matter to attend to, and was unable to give her keynote address. This was really disappointing, because not only do I think Python outreach is a great topic, but I really enjoyed her talk yesterday. Bummer.

The second keynote was Raymond Hettinger's "Python is Awesome" talk. Unfortunately, I had already seen a version of this talk on YouTube. Yes, I think Python is awesome. I am going to wager that pretty much everybody who attended the 2013 Python Conference also thinks Python is awesome. So hearing "Python is awesome" repeated over and over again was a bit of a let-down.

Fortunately, I had better luck with interesting talks today.

The first talk was given by Glyph on the subject of event-driven programming. Interesting topic, but Glyph threw so much code at me I had a hard time following it. Probably better for the experienced crowd.

Next was a controversial topic: "Cython vs SWIG, Fight!" Mark Kohler gave a very informative talk on some of the implementation details of these two C/Python interface tools. He must have had at least fifty slides, so the pace of slide flipping made it hard to follow as well. There was a vote at the end of the presentation, and I think Cython won.

Next I walked over to David Mertz's talk on Python3 for text processing. This was really interesting, as Unicode just seems like arcane knowledge to me.

After lunch was a solid block of awesome Python scientific computing stuff. Best part of the conference so far.

Matt Davis did an awesome presentation on capabilities of IPython notebook for education, using IPython notebook :). I use IPython almost every day, but I will definitely find a reason to use that notebook soon.

Later, Luke Lee (an engineer for an energy company) gave a talk about building scientific programs using various Python/numfocus packages. Interesting stuff, and he wrote a sort of "lab" to follow up with.

This was followed by Travis Oliphant's talk about a really exciting compiler tool called "numba". The github page describes this project as a "NumPy aware dynamic Python compiler using LLVM". This strikes me as a killer feature for all the high-speed computing freaks out there :). The productivity of Python with numpy, combined with the speed of C. There are lots of things like this popping up lately, and I think it's a fantastic trend.

Following a short break, I attended a dense, high-speed presentation of Python byte-code by Larry Hastings. Now I happen to love this topic. Being a hardware guy, I once thought it would be an awesome idea to implement a hardware co-processor that could execute CPython byte-code natively. I left the talk knowing much more about byte-code than when I arrived, but also somewhat depressed about the Python byte-code itself. My take on it so far is that the VM implements things such as integer addition without even dealing with the details of the width of the machine word. None of the byte-code opcodes have integer widths. So in order to execute this code, the runtime needs to implement all those details. Now that's fine for the CPython runtime, but strikes me as a very difficult design constraint for dedicated hardware.
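Out of curiosity, here is a quick way to see this for yourself with CPython's `dis` module (the exact opcode names vary between interpreter versions, but none of them encode an operand width):

```python
import dis

def add(a, b):
    return a + b

# The disassembly shows an abstract "binary add" opcode; nothing in
# the byte-code says whether the operands are 8, 32, or 64 bits wide.
# The runtime resolves all of that dynamically.
for ins in dis.Bytecode(add):
    print(ins.opname, ins.argrepr)
```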

Before taking off, I walked over to Doug Hellmann's talk on dynamic code patterns, but it was so full I just decided to observe the talk on "Lessons Learned in Teaching Python" instead. I actually feel bad for anybody asked to present last at a conference, because by 5pm everybody is already brain-dead.

All right, I've made it past the mid-point and I'm ready for tomorrow.

Friday, March 15, 2013

PyCon Day One

Starting out this morning, I learned how to use the San Jose light rail system. It's actually very convenient and easy.

Breakfast was eggs, bacon, muffins, orange juice, etc. You know, the usual.

Got a swag bag full of interesting stuff. Best gift BY FAR is the free Raspberry Pi. Now this will be a cool toy to play with!

During the opening presentation, Jesse Noller gave a very touching talk about all the progress Python has made as a community. This is probably my favorite aspect of Python. The community is very open, inclusive, diverse, and nice. The programming language is used by tech companies such as Google and NASA, and yet it is also being used for educating children. Now that's a great accomplishment! Imagine if you could use the same language for grade school math through post-doctoral research level math. That's basically what Python does for programming.

Keynote was given by one of the co-founders of Raspberry Pi. Funny and informative talk. All of this "Pi Day" nonsense will not dissuade me from supporting the one, true circle constant: tau.

Okay, on to the talks. First stop was "How the Internet Works". I'm glad I attended this, because before today I thought the Interwebs was composed of tubes or something like that. I can see why Jessica McKellar is doing a keynote this year--she's a great presenter. Even though it was basic, this was probably my favorite presentation of the day. Lots of unicorns and rainbows :).

Attended the "Rethinking Errors" by Bruce Eckel. I was sitting in the front row, next to Alex Martelli. My favorite part was during the questions. Martelli stands up to ask a question, and POINTS OUT A BUG in the presenter's demo code. That is hard core, man.

Onwards to "So you want to write an interpreter" by Alex Gaynor. Now this talk was frankly pretty dry. Alex tried to cover too much material, IMO. When you move that fast through such a deep software stack, that's bound to happen.

Lunch was some fajita-ish food--not bad. The line stretched around the exhibit hall.

Brett Cannon's talk on Python 3.3 features was awesome. I picked up a few tidbits from it, and I'm psyched to learn more about how the new "yield from" generator delegation syntax will tie into some future asynchronous event framework.

Next, decided to stay in the same room for two sessions on Github data visualization. Lots of interesting information here, but I should have read the program notes ahead of time. The only payoff in terms of actual data visualizations were a few "chord" diagrams. Most of the rest was discussing the tools they're using, and some interesting philosophy.

Titus Brown's talk on "Awesome Big Data Algorithms" mostly went over my head. My main take-away was that I need to Google for skip lists and bloom filters in order to better understand this area.

Lastly, "Write the Docs" by James Bennett was probably my second favorite presentation of the day. The content wasn't all that ground-breaking--software developers need to spend more time writing documentation. It was the presentation that I liked. James took more care with his presentation slides, and injected more humor than most other presenters.

I didn't stay through all the lightning talks. Instead, went back to Milpitas to eat some Chinese/Muslim blend food.

Great day number one. Studying talks for day two. Looking forward to the "All Singing and All Dancing Bytecode". Any talk about byte-code that includes a joke from Fight Club has my vote already :).

Thursday, March 14, 2013

PyCon Day Zero - Opening Reception

Showed up at the convention center around 6:30. The bar had "Blue Moon" beer, so I was pretty stoked. The food was mediocre, and the lines were long, so I mostly just spent time walking around looking at all the booths. Got some sweet water bottles from SurveyMonkey and Plivo. Picked up a few t-shirts too. Met Chad Whitacre of Gittip fame. Saw Travis Oliphant and Dave Beazley, though didn't stop to talk with them. Overheard some conversation in the background about how decorators are cool, yeah but generators are really cool :).

The Python crowd has a CS grad student feel to it. Lots of guys with beards :).

Wednesday, March 13, 2013

My First PyCon

I will be in Santa Clara Friday thru Sunday to attend PyCon 2013, and I am very excited about it. There are so many great talks and keynotes scheduled.

Here is my short take on the schedule so far.

Awesome; definitely attending:
  • So You Want to Write an Interpreter
  • Python Without the GIL
  • Awesome Big Data Algorithms
  • Building Full-Stack Scientific Applications
Mildly interesting:
  • How the Internet Works
  • Visualizing Github, Parts 1 & 2
  • Cython Vs. SWIG. Fight!
  • Python for Humans
Meh, what the hell, sure let's do it:
  • API Design for Library Authors
  • SQLAlchemy Session
  • Python at Netflix
  • How Import Works
There is plenty of other feel-good filler in there, such as "Namespace in Python", and "Python 3.3 is Better than 2.7".

I'm psyched. Let's do this!

Sunday, February 17, 2013

New DIMACS Parser in PyEDA

The Boolean satisfiability competitions use a file format called "DIMACS", named after the Center for Discrete Mathematics and Theoretical Computer Science. For several months I've had a really half-assed parser for the CNF format only; it didn't handle the more sophisticated SAT format, which is basically a mini-language for logical expressions of arbitrary depth. Something neat about the SAT format is that it supports not just the conventional NOT, OR, AND operators, but also XOR and EQUAL in the enhanced 'satx', 'sate', and 'satex' variants.

I'm not very smart about parsing methodology, but after ripping off a regex-based tokenizer from the Python standard library documentation, and thinking through the stack-based approach I probably first learned as an undergraduate but have since forgotten, it seems to pass some basic test cases. Now I just need to write some unit tests, and document the feature :).
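For the curious, here is the general shape of that approach--a regex tokenizer feeding a stack-based parser--using a toy prefix-expression grammar of my own invention (this is just a sketch in the same spirit, not PyEDA's actual SAT reader):

```python
import re

# Toy grammar: prefix operators applied to parenthesized operand lists,
# e.g. "+(1 -2 *(2 3))" meaning Or(1, Not(2), And(2, 3)).
TOKENS = [
    ("OP",  r"[+*=x]"),   # or, and, equal, xor
    ("LIT", r"-?\d+"),    # a (possibly negated) variable index
    ("LP",  r"\("),
    ("RP",  r"\)"),
    ("WS",  r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKENS))

def tokenize(s):
    """Yield (kind, text) pairs, skipping whitespace."""
    for m in MASTER.finditer(s):
        if m.lastgroup != "WS":
            yield m.lastgroup, m.group()

def parse(s):
    """Stack-based parse into nested tuples like ('+', 1, -2, ...)."""
    stack = [[]]
    pending_op = None
    for kind, val in tokenize(s):
        if kind == "OP":
            pending_op = val
        elif kind == "LP":
            stack.append([pending_op])   # open a new frame for this op
            pending_op = None
        elif kind == "RP":
            frame = stack.pop()          # close the frame and attach it
            stack[-1].append(tuple(frame))
        else:                            # LIT
            stack[-1].append(int(val))
    return stack[0][0]
```

The stack does all the work of tracking arbitrary nesting depth: every opening parenthesis pushes a new frame, and every closing parenthesis pops it back into its parent.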

Wednesday, January 30, 2013

AES in 300 lines of SystemVerilog

Yay! I spoke with an Intel open source committee, and they gave me permission to publish my SystemVerilog AES core as open source!

https://github.com/cjdrake/AES

I used Synopsys VCS for verification, and DesignCompiler for synthesis. Considering some of the newer language constructs, I doubt any of the open source tools (Icarus, Veripool) will comprehend the code out of the box.

A few months ago I became obsessed with the AES algorithm, and decided I must have an implementation to play with. After a long weekend, and a few weekday evenings, I figured out a way to capture the algorithm at a very high level so the implementation itself looks almost identical to the algorithm given in the specification.

There are already something like ten implementations of AES available on opencores.org. I will take the Pepsi challenge against any of those cores any day :).

Saturday, December 29, 2012

Why Not Nationalize Law Practices?

In many ways, the practice of medicine is very similar to the practice of law. A person spends many years training to become either a doctor or a lawyer. The work is well-paid, highly specialized, and highly respected. Okay, that last bit about lawyers being respected isn't true; everybody knows lawyers are scum :).

So why is the practice of medicine currently being nationalized under Obamacare, and nobody is calling for the practice of law to also be nationalized?

One of President Obama's talking points when selling nationalized health care was that if an uninsured person gets a serious illness, he may go bankrupt from the cost of treatment. Is it not also true that if an uninsured person is sued, he may also go bankrupt? Even if you win the case, the lawyer fees are certain to set you back a bit.

I suppose there is at least one area where law is nationalized--criminal defense attorneys. The Constitution itself says that you have the right to an attorney. But let's say you are OJ Simpson, accused of the heinous crime of murdering your wife and her lover. Are you going to solicit the services of a public defender? Of course not! Everybody knows that public defenders are crap, so you will hire the best lawyer your money can buy. Your expensive lawyers choose the stupidest jury possible, and dupe them into believing you are innocent.

So everybody knows that public defenders are bulk-rate losers who will land your ass in jail, but they cannot figure out that public medicine will end up being just about as bad...

I say let's nationalize all the lawyers! We will call it Obamalaw, or some other catchy thing. In addition to Medicare and Medicaid, let's also institute a new federal entitlement--Mediclaw! There are so many slip-and-fall lawsuits that aren't being filed each year. Free lawyers for everyone!

Of course, there might be some difficulty with this, considering the vast majority of congressmen and senators are lawyers. Oh yeah, and Obama himself is a lawyer, just like Bush before him, and Clinton before him. I see some kind of a pattern forming here.

Fucking lawyers.

Sunday, November 25, 2012

Political Strategy in the Age of Obama

The Republicans decisively lost the 2012 presidential election. Media outlets and the Romney campaign have offered all kinds of excuses:

  • Obama behaved like Santa Claus, promising everything to everyone
  • Minority constituencies in the USA now outnumber big whitey.
  • Republicans are the party of female-hating chauvinist pigs, who are stupid enough to use expressions involving "legitimate rape".
  • Romney wasn't conservative enough.
And so on, and so forth...

Fortunately for Republicans, they were not completely defeated; they still hold the House by a large majority. Since all revenue legislation begins in the House of Representatives, you can count on another four years of government gridlock. Unfortunately for them, however, while they are squabbling over budget legislation, Obama will steadfastly grow the bureaucracy, and the country will continue its long march toward financial insolvency.

Things are looking pretty dark for the grand old party, but I have a theory on how Republicans can get back into the game: join the Democrat party.

Imagine what would happen if the 20% of America that describes itself as "conservative" were to switch their voter registration from Republican/Independent to Democrat. The newly-registered double-donkey-agents may now attend Democrat events, and discuss conservative ideas with "fellow" Democrats. They may vote in Democrat primaries, potentially nominating candidates who are moderates. Furthermore, they may even participate in party get-out-the-vote efforts, sabotaging as they see fit. If any election fraud happens to take place, maybe somebody can see to it that all the felons, illegal aliens, dead people, and cartoon characters actually vote for Republicans instead of Democrats for once.

Politics is just a game, and the Nash equilibrium of this particular game promises that candidates will rhetorically come to the center on most important issues. The Democrats have been extremely successful at fielding candidates who talk like moderates but act like socialists. Rank and file Democrats are not curious enough to discover the truth about their candidates--they just do what they are told. Unless some new blood enters the Democrat party, we will continue to get a steady stream of hard-core lefty candidates.

According to Sun-Tzu, "all warfare is based on deception". If Republicans want to win the long game, they must pretend to be Democrats.

Monday, October 01, 2012

Ceiling Log Base Two Function

An exceedingly useful function for digital logic applications is the ceiling, logarithm base two of a positive integer. Rather than writing the long-handed version of this function, ceil(log2(N)), I will refer to it simply as clog2(N). Let's look at a few examples:

+===+=========+==========+
| N | log2(N) | clog2(N) |
+===+=========+==========+
| 1 | 0       | 0        |
| 2 | 1       | 1        |
| 3 | 1.5850  | 2        |
| 4 | 2       | 2        |
| 5 | 2.3219  | 3        |
| 6 | 2.5850  | 3        |
| 7 | 2.8074  | 3        |
| 8 | 3       | 3        |
| 9 | 3.1699  | 4        |
+===+=========+==========+

At all powers of two, the log2 function is an integer, and therefore its ceiling is the same integer. This follows directly from the definition of the logarithm base two: log2(N) = x <=> 2^x = N.
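In Python, a minimal clog2 for positive integers can sidestep floating-point rounding entirely by using integer bit lengths:

```python
def clog2(n):
    """Ceiling of log base two of a positive integer n."""
    if n < 1:
        raise ValueError("n must be a positive integer")
    # (n - 1).bit_length() equals ceil(log2(n)) for all n >= 1:
    # powers of two land exactly, and everything in between rounds up.
    return (n - 1).bit_length()
```

The obvious alternative, math.ceil(math.log2(n)), gives the same answers for small n, but can be off by one for huge integers near a power of two, which is why the all-integer version is preferable.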

So why is this a useful function? clog2(N) tells you the dimension of a Boolean space, where N=1 is the NULL space (zero dimensions).

For example, three binary digits can represent the following eight points in {0, 1}^3:

+===+========+========+
| 0 | 3'b000 | a'b'c' |
| 1 | 3'b001 | a'b'c  |
| 2 | 3'b010 | a'b c' |
| 3 | 3'b011 | a'b c  |
| 4 | 3'b100 | a b'c' |
| 5 | 3'b101 | a b'c  |
| 6 | 3'b110 | a b c' |
| 7 | 3'b111 | a b c  |
+===+========+========+

This all sounds pretty abstract, but it is very useful in computing the number of bits required to uniquely address N locations. Imagine you have a FIFO with six slots. Starting from zero, I would label them 0, 1, 2, 3, 4, 5. The number five in binary is 3'b101, and therefore I need a maximum of three bits to address six slots. The table above verifies that clog2(6) = 3.

This works out fine for all values of N greater than one. For example, a 2-deep FIFO requires one address bit, a 4-deep FIFO requires two bits, and so on. However, how many address bits do I need for a 1-deep FIFO? The answer is given by clog2(1) = 0. A FIFO that has only one storage location does not actually need an address at all. It would be as if there were only one house on your street--you could give it an address if you wanted, but it's not required, because it is unambiguous which house you mean when you give directions.

Lazy engineers tend to ignore this fact for convenience. Most people will tell you that the clog2(N) function answers the question: "how many bits do I need to represent any non-negative integer less than N". This is actually incorrect, because you need one bit (not zero) to represent the number zero (1'b0).

The correct interpretation of clog2(1) = 0, is that it corresponds to the empty set, or NULL. It represents a zero-dimensional space. Asking for the address of the storage slot in a 1-deep FIFO is an absurd question--it doesn't need an address; it is just NULL.