java.lang.Object
  org.apache.hadoop.hbase.regionserver.wal.HLog
public class HLog
HLog stores all the edits to the HStore. It is the HBase write-ahead-log implementation. It performs log-file rolling, so external callers are not aware that the underlying file is being rolled.
There is one HLog per RegionServer. All edits for all Regions carried by a particular RegionServer are entered first in the HLog.
Each HRegion is identified by a unique long int. HRegions do not need to declare themselves before using the HLog; they simply include their HRegion-id in the append or completeCacheFlush calls.
An HLog consists of multiple on-disk files, which have a chronological order. As data is flushed to other (better) on-disk structures, the log becomes obsolete. We can destroy all the log messages for a given HRegion-id up to the most-recent CACHEFLUSH message from that HRegion.
It's only practical to delete entire files. Thus, we delete an entire on-disk file F when all of the messages in F have a log-sequence-id that's older (smaller) than the most-recent CACHEFLUSH message for every HRegion that has a message in F.
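The file-deletion rule above can be sketched in plain Java. This is an illustrative sketch only, not HLog's implementation; the class and parameter names (`WalFileCleanup`, `maxSeqIdInFile`, `lastFlushedSeqId`) are invented for the example. A file F is deletable only when every region with edits in F has flushed past F's newest edit for that region:

```java
import java.util.Map;

// Hypothetical sketch of the WAL file-deletion rule described above;
// not HLog's actual code.
public class WalFileCleanup {

    /**
     * A log file F may be deleted only if every region with edits in F
     * has a most-recent CACHEFLUSH sequence id at least as large as F's
     * newest edit for that region.
     *
     * @param maxSeqIdInFile   per-region highest log-sequence-id present in F
     * @param lastFlushedSeqId per-region sequence id of the most recent
     *                         CACHEFLUSH message
     */
    public static boolean canDelete(Map<String, Long> maxSeqIdInFile,
                                    Map<String, Long> lastFlushedSeqId) {
        for (Map.Entry<String, Long> e : maxSeqIdInFile.entrySet()) {
            Long flushed = lastFlushedSeqId.get(e.getKey());
            // A region that never flushed, or flushed too early, pins the file.
            if (flushed == null || flushed < e.getValue()) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Map<String, Long> inFile = Map.of("region-a", 10L, "region-b", 20L);
        // region-b has not flushed past seq id 20, so the file is pinned.
        System.out.println(canDelete(inFile, Map.of("region-a", 15L, "region-b", 12L)));
        // Both regions have flushed past their newest edit in the file.
        System.out.println(canDelete(inFile, Map.of("region-a", 15L, "region-b", 25L)));
    }
}
```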
Synchronized methods can never execute in parallel. However, between the start of a cache flush and the completion point, appends are allowed but log rolling is not. To prevent log rolling taking place during this period, a separate reentrant lock is used.
To read an HLog, call getReader(org.apache.hadoop.fs.FileSystem,
org.apache.hadoop.fs.Path, org.apache.hadoop.conf.Configuration).
| Nested Class Summary | |
|---|---|
static class |
HLog.Entry
Utility class that lets us keep track of the edit with its key. Only used when splitting logs. |
static interface |
HLog.Reader
|
static interface |
HLog.Writer
|
| Field Summary | |
|---|---|
static long |
FIXED_OVERHEAD
|
static byte[] |
METAFAMILY
|
| Constructor Summary | |
|---|---|
HLog(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path dir,
org.apache.hadoop.fs.Path oldLogDir,
org.apache.hadoop.conf.Configuration conf)
Constructor. |
|
HLog(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path dir,
org.apache.hadoop.fs.Path oldLogDir,
org.apache.hadoop.conf.Configuration conf,
java.util.List<WALObserver> listeners,
boolean failIfLogDirExists,
java.lang.String prefix)
Create an edit log at the given dir location. |
|
HLog(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path dir,
org.apache.hadoop.fs.Path oldLogDir,
org.apache.hadoop.conf.Configuration conf,
java.util.List<WALObserver> listeners,
java.lang.String prefix)
Create an edit log at the given dir location. |
|
| Method Summary | |
|---|---|
void |
abortCacheFlush()
Abort a cache flush. |
void |
append(HRegionInfo info,
byte[] tableName,
WALEdit edits,
long now)
Append a set of edits to the log. |
void |
append(HRegionInfo regionInfo,
HLogKey logKey,
WALEdit logEdit)
Append an entry to the log. |
void |
append(HRegionInfo regionInfo,
WALEdit logEdit,
long now,
boolean isMetaRegion)
Append an entry to the log. |
void |
close()
Shut down the log. |
void |
closeAndDelete()
Shut down the log and delete the log directory |
void |
completeCacheFlush(byte[] encodedRegionName,
byte[] tableName,
long logSeqId,
boolean isMetaRegion)
Complete the cache flush. Protected by cacheFlushLock. |
protected org.apache.hadoop.fs.Path |
computeFilename()
This is a convenience method that computes a new filename using the current HLog file-number. |
protected org.apache.hadoop.fs.Path |
computeFilename(long filenum)
This is a convenience method that computes a new filename with a given file-number. |
static HLog.Writer |
createWriter(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path path,
org.apache.hadoop.conf.Configuration conf)
Get a writer for the WAL. |
protected HLog.Writer |
createWriterInstance(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path path,
org.apache.hadoop.conf.Configuration conf)
This method allows subclasses to inject different writers without having to extend other methods like rollWriter(). |
protected void |
doWrite(HRegionInfo info,
HLogKey logKey,
WALEdit logEdit)
|
protected org.apache.hadoop.fs.Path |
getDir()
Get the directory we are making logs in. |
long |
getFilenum()
|
static java.lang.String |
getHLogDirectoryName(HServerInfo info)
Construct the HLog directory name |
static java.lang.String |
getHLogDirectoryName(java.lang.String serverName)
Construct the HLog directory name |
static java.lang.String |
getHLogDirectoryName(java.lang.String serverAddress,
long startCode)
Construct the HLog directory name |
static java.lang.Class<? extends HLogKey> |
getKeyClass(org.apache.hadoop.conf.Configuration conf)
|
static HLog.Reader |
getReader(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path path,
org.apache.hadoop.conf.Configuration conf)
Get a reader for the WAL. |
static org.apache.hadoop.fs.Path |
getRegionDirRecoveredEditsDir(org.apache.hadoop.fs.Path regiondir)
|
long |
getSequenceNumber()
|
static java.util.NavigableSet<org.apache.hadoop.fs.Path> |
getSplitEditFilesSorted(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path regiondir)
Returns sorted set of edit files made by wal-log splitter. |
static long |
getSyncOps()
|
static long |
getSyncTime()
|
static long |
getWriteOps()
|
static long |
getWriteTime()
|
void |
hsync()
|
static boolean |
isMetaFamily(byte[] family)
|
static void |
main(java.lang.String[] args)
Pass one or more log file names and it will either dump out a text version on stdout or split the specified log files. |
protected HLogKey |
makeKey(byte[] regionName,
byte[] tableName,
long seqnum,
long now)
|
static org.apache.hadoop.fs.Path |
moveAsideBadEditsFile(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path edits)
Move aside a bad edits file. |
static HLogKey |
newKey(org.apache.hadoop.conf.Configuration conf)
|
void |
registerWALActionsListener(WALObserver listener)
|
byte[][] |
rollWriter()
Roll the log writer. |
void |
setSequenceNumber(long newvalue)
Called by HRegionServer when it opens a new region to ensure that log sequence numbers are always greater than the latest sequence number of the region being brought on-line. |
long |
startCacheFlush()
By acquiring a log sequence ID, we can allow log messages to continue while we flush the cache. |
void |
sync()
|
boolean |
unregisterWALActionsListener(WALObserver listener)
|
static boolean |
validateHLogFilename(java.lang.String filename)
|
| Methods inherited from class java.lang.Object |
|---|
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait |
| Field Detail |
|---|
public static final byte[] METAFAMILY
public static final long FIXED_OVERHEAD
| Constructor Detail |
|---|
public HLog(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path dir,
org.apache.hadoop.fs.Path oldLogDir,
org.apache.hadoop.conf.Configuration conf)
throws java.io.IOException
Parameters:
fs - filesystem handle
dir - path to where hlogs are stored
oldLogDir - path to where hlogs are archived
conf - configuration to use
Throws:
java.io.IOException
public HLog(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path dir,
org.apache.hadoop.fs.Path oldLogDir,
org.apache.hadoop.conf.Configuration conf,
java.util.List<WALObserver> listeners,
java.lang.String prefix)
throws java.io.IOException
Create an edit log at the given dir location.
You should never have to load an existing log. If there is a log at startup, it should have already been processed and deleted by the time the HLog object is started up.
Parameters:
fs - filesystem handle
dir - path to where hlogs are stored
oldLogDir - path to where hlogs are archived
conf - configuration to use
listeners - Listeners on WAL events. Listeners passed here will be registered before we do anything else; e.g. the Constructor rollWriter().
prefix - should always be hostname and port in distributed env and it will be URL encoded before being used. If prefix is null, "hlog" will be used.
Throws:
java.io.IOException
public HLog(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path dir,
org.apache.hadoop.fs.Path oldLogDir,
org.apache.hadoop.conf.Configuration conf,
java.util.List<WALObserver> listeners,
boolean failIfLogDirExists,
java.lang.String prefix)
throws java.io.IOException
Create an edit log at the given dir location.
You should never have to load an existing log. If there is a log at startup, it should have already been processed and deleted by the time the HLog object is started up.
Parameters:
fs - filesystem handle
dir - path to where hlogs are stored
oldLogDir - path to where hlogs are archived
conf - configuration to use
listeners - Listeners on WAL events. Listeners passed here will be registered before we do anything else; e.g. the Constructor rollWriter().
failIfLogDirExists - If true, an IOException will be thrown if dir already exists.
prefix - should always be hostname and port in distributed env and it will be URL encoded before being used. If prefix is null, "hlog" will be used.
Throws:
java.io.IOException
| Method Detail |
|---|
public static long getWriteOps()
public static long getWriteTime()
public static long getSyncOps()
public static long getSyncTime()
public void registerWALActionsListener(WALObserver listener)
public boolean unregisterWALActionsListener(WALObserver listener)
public long getFilenum()
public void setSequenceNumber(long newvalue)
Parameters:
newvalue - We'll set the log edit/sequence number to this value if it is greater than the current value.

public long getSequenceNumber()
public byte[][] rollWriter()
throws FailedLogCloseException,
java.io.IOException
Note that this method cannot be synchronized because of a possible deadlock: startCacheFlush runs and obtains the cacheFlushLock; this method then starts, obtains the lock on this, but blocks on obtaining the cacheFlushLock; completeCacheFlush is then called, waits for the lock on this, and consequently the cacheFlushLock is never released.
HRegionInfo.getEncodedName()
FailedLogCloseException
java.io.IOException
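The deadlock described in the note above is a classic lock-ordering problem. The sketch below (all names are illustrative, not HLog's actual fields) shows the discipline that avoids it: the roller takes the flush lock before the object monitor, so it never holds the monitor while waiting for a flush to finish:

```java
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch of the lock ordering the note above describes;
// field and method names are hypothetical, not HLog's implementation.
public class LockOrderSketch {
    private final ReentrantLock cacheFlushLock = new ReentrantLock();
    private long seq;

    /** Begin a flush: hold the flush lock until complete/abort. */
    public long startCacheFlush() {
        cacheFlushLock.lock();          // blocks rolling for the flush's duration
        synchronized (this) {
            return ++seq;               // obtain the flush sequence id
        }
    }

    /** End the flush: needs the monitor, then releases the flush lock. */
    public synchronized void completeCacheFlush() {
        // Safe because roll() never holds the monitor while waiting
        // for cacheFlushLock.
        cacheFlushLock.unlock();
    }

    /** Roll: take the flush lock FIRST, the monitor SECOND. */
    public void roll() {
        cacheFlushLock.lock();          // outer lock first ...
        try {
            synchronized (this) {       // ... monitor second
                seq++;                  // stand-in for swapping the log writer
            }
        } finally {
            cacheFlushLock.unlock();
        }
    }

    public static void main(String[] args) {
        LockOrderSketch s = new LockOrderSketch();
        long id = s.startCacheFlush();  // flush begins; rolling now blocked
        s.completeCacheFlush();         // flush ends; rolling allowed again
        s.roll();
        System.out.println("flush id = " + id);  // flush id = 1
    }
}
```

If roll() instead took the monitor first, the three-way wait described in the note (flush holds the flush lock, roller holds the monitor, completer needs the monitor) could leave the flush lock permanently held.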
protected HLog.Writer createWriterInstance(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path path,
org.apache.hadoop.conf.Configuration conf)
throws java.io.IOException
Parameters:
fs -
path -
conf -
Throws:
java.io.IOException
public static HLog.Reader getReader(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path path,
org.apache.hadoop.conf.Configuration conf)
throws java.io.IOException
Parameters:
fs -
path -
conf -
Throws:
java.io.IOException
public static HLog.Writer createWriter(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path path,
org.apache.hadoop.conf.Configuration conf)
throws java.io.IOException
Parameters:
path -
conf -
Throws:
java.io.IOException

protected org.apache.hadoop.fs.Path computeFilename()
protected org.apache.hadoop.fs.Path computeFilename(long filenum)
Parameters:
filenum - to use
public void closeAndDelete()
throws java.io.IOException
java.io.IOException
public void close()
throws java.io.IOException
java.io.IOException
public void append(HRegionInfo regionInfo,
WALEdit logEdit,
long now,
boolean isMetaRegion)
throws java.io.IOException
Parameters:
regionInfo -
logEdit -
now - Time of this edit write.
Throws:
java.io.IOException
protected HLogKey makeKey(byte[] regionName,
byte[] tableName,
long seqnum,
long now)
Parameters:
now -
regionName -
tableName -
public void append(HRegionInfo regionInfo,
HLogKey logKey,
WALEdit logEdit)
throws java.io.IOException
Parameters:
regionInfo -
logEdit -
logKey -
Throws:
java.io.IOException
public void append(HRegionInfo info,
byte[] tableName,
WALEdit edits,
long now)
throws java.io.IOException
Logs cannot be restarted once closed, or once the HLog process dies. Each time the HLog starts, it must create a new log. This means that other systems should process the log appropriately upon each startup (and prior to initializing HLog). synchronized prevents appends during the completion of a cache flush or for the duration of a log roll.
Parameters:
info -
tableName -
edits -
now -
Throws:
java.io.IOException
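The append contract described above (each edit is stamped with a monotonically increasing log-sequence-id, and setSequenceNumber only ever moves the counter forward) can be illustrated with a toy in-memory log. All names below are invented for the sketch; this is not HBase code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Toy in-memory WAL illustrating the append contract described above:
// every appended edit gets a strictly increasing sequence id, and the
// counter can only be advanced, never rewound. Names are hypothetical.
public class ToyWal {
    private final AtomicLong logSeqNum = new AtomicLong(0);
    private final List<String> entries = new ArrayList<>();

    /** Append one edit, stamping it with the next sequence id. */
    public synchronized long append(String regionId, String edit) {
        long seq = logSeqNum.incrementAndGet();
        entries.add(seq + ":" + regionId + ":" + edit);
        return seq;
    }

    /** Mirrors setSequenceNumber: only ever moves the counter forward. */
    public void setSequenceNumber(long newvalue) {
        logSeqNum.accumulateAndGet(newvalue, Math::max);
    }

    public long getSequenceNumber() {
        return logSeqNum.get();
    }

    public static void main(String[] args) {
        ToyWal wal = new ToyWal();
        wal.append("r1", "put k1");        // seq 1
        wal.append("r2", "put k2");        // seq 2
        wal.setSequenceNumber(100);        // region opened with a higher seq id
        wal.setSequenceNumber(50);         // ignored: smaller than current
        System.out.println(wal.append("r1", "put k3"));  // prints 101
    }
}
```

This mirrors why a region being brought on-line calls setSequenceNumber: subsequent appends must carry sequence ids greater than anything already persisted for that region.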
public void sync()
throws java.io.IOException
Specified by:
sync in interface org.apache.hadoop.fs.Syncable
Throws:
java.io.IOException
public void hsync()
throws java.io.IOException
java.io.IOException
protected void doWrite(HRegionInfo info,
HLogKey logKey,
WALEdit logEdit)
throws java.io.IOException
java.io.IOException

public long startCacheFlush()
See Also:
completeCacheFlush(byte[], byte[], long, boolean), abortCacheFlush()
public void completeCacheFlush(byte[] encodedRegionName,
byte[] tableName,
long logSeqId,
boolean isMetaRegion)
throws java.io.IOException
Parameters:
encodedRegionName -
tableName -
logSeqId -
Throws:
java.io.IOException

public void abortCacheFlush()
public static boolean isMetaFamily(byte[] family)
Parameters:
family -
public static java.lang.Class<? extends HLogKey> getKeyClass(org.apache.hadoop.conf.Configuration conf)
public static HLogKey newKey(org.apache.hadoop.conf.Configuration conf)
throws java.io.IOException
java.io.IOException

public static java.lang.String getHLogDirectoryName(HServerInfo info)
Parameters:
info - HServerInfo for server
public static java.lang.String getHLogDirectoryName(java.lang.String serverAddress,
long startCode)
Parameters:
serverAddress -
startCode -
public static java.lang.String getHLogDirectoryName(java.lang.String serverName)
Parameters:
serverName -
protected org.apache.hadoop.fs.Path getDir()
public static boolean validateHLogFilename(java.lang.String filename)
public static java.util.NavigableSet<org.apache.hadoop.fs.Path> getSplitEditFilesSorted(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path regiondir)
throws java.io.IOException
Parameters:
fs -
regiondir -
Returns:
Files in the passed regiondir as a sorted set.
Throws:
java.io.IOException
public static org.apache.hadoop.fs.Path moveAsideBadEditsFile(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path edits)
throws java.io.IOException
Parameters:
fs -
edits - Edits file to move aside.
Throws:
java.io.IOException

public static org.apache.hadoop.fs.Path getRegionDirRecoveredEditsDir(org.apache.hadoop.fs.Path regiondir)
Parameters:
regiondir - This region's directory in the filesystem.
Returns:
Path to the recovered-edits directory under regiondir
public static void main(java.lang.String[] args)
throws java.io.IOException
Pass one or more log file names and it will either dump out a text version on stdout or split the specified log files.
Parameters:
args -
Throws:
java.io.IOException